Category Archives: Philosophy

Performance Based Funding & Adjustments


Photo by Paula O’Neil

I have written numerous essays on the issue of performance-based funding of Florida state universities. This essay adds to the stack by addressing the matter of adjusting the assessment on the basis of impediments. I will begin, as I so often do, with a running analogy.

This coming Thursday is Thanksgiving and I will, as I have for the past few decades, run the Tallahassee Turkey Trot. By ancient law, the more miles you run on Thanksgiving, the more pumpkin pie and turkey you can stuff into your pie port. This is good science.

Back in the day, people wanted me to be on their Turkey Trot team because I was (relatively) fast. These days, I am asked to be on a team because I am (relatively) old but still (relatively) mobile.  As to why age and not just speed would be important in team selection, the answer is that the team scoring involves the use of an age grade calculator. While there is some debate about the accuracy of the calculators, the basic idea is sound: the impact of aging on performance can be taken into account in order to “level the playing field” (or “running road”) so as to allow fair comparisons and assessments of performance between people of different ages.

Suppose, for example, I wanted to compare my performance as a 49-year-old runner relative to a young man (perhaps my younger and much faster self). The most obvious way to do this is to simply compare our times in the same race and this would be a legitimate comparison. If I ran the 5K in 20 minutes and the young fellow ran it in 19 minutes, he would have performed better than I did. However, if a fair comparison were desired, then the effect of aging should be taken into account—after all, as I like to say, I am dragging the weight of many more years. Using an age grade calculator, my 20-minute 5K would be age adjusted to be equivalent to a 17:45 run by a young man. As such, I would have performed better than the young fellow given the temporal challenge I faced.
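The arithmetic behind such a calculator can be sketched in a few lines. Real age grade calculators use published factor tables indexed by age, sex, and distance; the single factor below (0.8875 for a 49-year-old at 5K) is a hypothetical stand-in, chosen only so that the sketch reproduces the 20:00-to-17:45 adjustment described above.

```python
# Hypothetical sketch of age-grade adjustment. Real calculators draw
# on full factor tables; the value below is assumed for illustration.
AGE_FACTORS_5K = {49: 0.8875}  # assumed factor, not an official value

def age_adjusted_seconds(actual_seconds: float, age: int) -> float:
    """Scale an actual time down to its open-class equivalent."""
    return actual_seconds * AGE_FACTORS_5K[age]

def fmt(seconds: float) -> str:
    """Render seconds as M:SS."""
    minutes, secs = divmod(round(seconds), 60)
    return f"{minutes}:{secs:02d}"

# A 20:00 5K (1200 seconds) run at age 49 adjusts to 17:45.
print(fmt(age_adjusted_seconds(20 * 60, 49)))  # 17:45
```

The point of the multiplication is the same as the point of the analogy: the raw time measures the outcome, while the adjusted time measures the performance given the impediment.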

While assessing running times is different from assessing the performance of a university, the situations do seem similar in relevant ways. To be specific, the goal is to assess performance and to do so fairly. In the case of running, measuring the performance can be done by using only the overall times, but this does not truly measure the performance in terms of how well each runner has done in regards to the key challenge of age. Likewise, universities could be compared in terms of the unadjusted numbers, but this would not provide a fair basis for measuring performance without considering the key challenges faced by each university.

As I have mentioned in previous essays, my university, Florida A&M University, has fared poorly under the state’s assessment system. As with using just the actual times from a race, this assessment is a fair evaluation given the standards. My university really is doing worse than the other schools, given the assigned categories and the way the results are calculated. However, Florida A&M University (and other schools) face challenges that the top ranked schools do not face (or do not face to the same degree). As such, a truly fair assessment of the performance of the schools would need to employ something analogous to the age graded calculations.

As noted in another essay, Florida A&M University is well ranked in terms of its contribution to social mobility. One reason for this is that the majority of Florida A&M University students are low-income students and the school does reasonably well at helping them move up. However, lower-income students face numerous challenges that lower their chances of graduation and success. These factors include the fact that students from poor schools (which tend to be located in economically disadvantaged areas) will tend to be poorly prepared for college. Another factor is that poverty negatively impacts brain development as well as academic performance. There is also the obvious fact that disadvantaged students need to borrow more money than students from wealthier backgrounds, which entails more student debt: seventy percent of African American students say that student debt is their main reason for dropping out, while less than fifty percent of white students make this claim.

Given the impediments faced by lower-income students, the assessment of university performance should be economically graded—that is, there should be an adjustment that compensates for the negative effect of the economic disadvantages of the students. Without this, the performance of the university cannot be properly assessed. Even though a university’s overall numbers might be lower than those of other schools, the school’s actual performance in terms of what it is doing for its students might be quite good.

In addition to the economic factors, there is also the factor of racism (which is also intertwined with economics). As I have mentioned in prior essays, African-American students are still often victims of segregation in regards to K-12 education and receive generally inferior education relative to white students. This clearly will impact college performance.

Race is also a major factor in regards to economic success. As noted in a previous essay, people with white-sounding names are more likely to get interviews and callbacks. For whites, the unemployment rate is 5.3%, while for blacks it is 11.4%. The poverty rate for whites is 9.7%, while for blacks it is 27.2%. The median household wealth for whites is $91,405 and for blacks $6,446. Blacks own homes at a rate of 43.5%, while whites do so at 72.9%. Median household income is $35,416 for blacks and $59,754 for whites. Since many of the factors used to assess Florida state universities use economic and performance factors that are impacted by the effects of racism, fairness would require that there be a racism graded calculation. This would factor in how the impact of racism lowers the academic and economic success of black college graduates, thus allowing an accurate measure of the performance of Florida A&M University and other schools. Without such adjustments, there is no clear measure of how the schools actually are performing.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Refugees & Terrorists

In response to the recent terrorist attack in Paris (but presumably not those outside the West, such as in Beirut) many governors have stated they will try to prevent the relocation of Syrian refugees into their states. These states include my home state of Maine, my university state of Ohio and my adopted state of Florida. Recognizing a chance to score political points, some Republican presidential candidates have expressed their opposition to allowing more Syrian refugees into the country. Some, such as Ted Cruz, have proposed a religious test for entry into the country: Christian refugees would be allowed, while Muslim refugees would be turned away.

On the one hand, it is tempting to dismiss this as mere political posturing and pandering to fear, racism and religious intolerance. On the other hand, it is worth considering the legitimate worries that lie under the posturing and the pandering. One worry is, of course, the possibility that terrorists could masquerade as refugees to enter the country. Another worry is that refugees who are not already terrorists might be radicalized and become terrorists.

In matters of politics, it is rather unusual for people to operate on the basis of consistently held principles. Instead, views tend to be held on the basis of how a person feels about a specific matter or what the person thinks about the political value of taking a specific position. However, a proper moral assessment requires considering the matter in terms of general principles and consistency.

In the case of the refugees, the general principle justifying excluding them would be something like this: it is morally acceptable to exclude from a state groups who include people who might pose a threat. This principle seems, in general, quite reasonable. After all, excluding people who might present a threat serves to protect people from harm.

Of course, this principle is incredibly broad and would justify excluding almost anyone and everyone. After all, nearly every group of people (tourists, refugees, out-of-staters, men, Christians, atheists, cat fanciers, football players, and so on) includes people who might pose a threat. While excluding everyone would increase safety, it would certainly make for a rather empty state. As such, this general principle should be subject to some additional refinement in terms of such factors as the odds that a dangerous person will be in the group in question, the harm such a person is likely to do, and the likely harms from excluding such people.

As noted above, the concern about refugees from Syria (and the Middle East) is that they might include terrorists or terrorists-to-be. One factor to consider is the odds that this will occur. The United States has a fairly extensive (and slow) vetting process for refugees and, as such, it is not surprising that of “745,000 refugees resettled since September 11th, only two Iraqis in Kentucky have been arrested on terrorist charges, for aiding al-Qaeda in Iraq.” This indicates that although the chance of a terrorist arriving masquerading as a refugee is not zero, it is exceptionally unlikely.

It might be countered, using the usual hyperbolic rhetoric of such things, that if even one terrorist gets into the United States, that would be an intolerable disaster. While I do agree that this would be a bad thing, there is the matter of general principles. In this case, would it be reasonable to operate on a principle that the possibility of even one bad outcome is sufficient to warrant a broad ban on something? That, I would contend, would generally seem to be unreasonable. This principle would justify banning guns, nuts, cars and almost all other things. It would also justify banning tourists and visitors from other states. After all, tourists and people from other states do bad things in states from time to time. As such, this principle seems unreasonable.

There is, of course, the matter of the political risk. A politician who supports allowing refugees to come into her state will be vilified by certain pundits and a certain news outlet if even a single incident happens. This, of course, would be no more reasonable than vilifying a politician who supports the second amendment just because a person is wrongly shot in her state.  But, reason is usually absent in the realm of political punditry.

Another factor to consider is the harm that would be done by excluding such refugees. If they cannot be settled someplace, they will be condemned to live as involuntary nomads and suffer all that entails. There is also the ironic possibility that such excluded refugees will become, as pundits like to say, radicalized. After all, people who are deprived of hope and who are treated as pariahs tend to become a bit resentful and some might decide to actually become terrorists. There is also the fact that banning refugees provides a nice bit of propaganda for the terrorist groups.

Given that the risk is very small and the harm to the refugees would be significant, the moral thing to do is to allow the refugees into the United States. Yes, one of them could be a terrorist. But so could a tourist. Or some American coming from another state. Or already in the state.

In addition to the sort of utilitarian calculation just made, an argument can also be advanced on the basis of moral duties to others, even when acting on such a duty involves risk. In terms of religious-based ethics, a standard principle is to love thy neighbor as thyself, which would seem to require that the refugees be aided, even at a slight risk. There is also the golden rule: if the United States fell into chaos and war, Americans fleeing the carnage would want other people to help them, even though we Americans have a reputation for violence. As such, we need to accept refugees.

As a closing point, we Americans love to make claims about the moral superiority and exceptionalism of our country. Talk is cheap, so if we want to prove our alleged superiority and exceptionalism, we have to act in an exceptional way. Refusing to help people out of fear is to show a lack of charity, compassion and courage. This is not what an exceptional nation would do.



The Left’s Defection from Progress

Note: This is a slightly abridged (but otherwise largely warts and all) version of an article that I had published in Quadrant magazine in April 1999. It has not previously been published online (except that I am cross-posting on my own blog, Metamagician and the Hellfire Club). While my views have developed somewhat in the interim, there may be some advantage in republishing it for a new audience, especially at a time when there is much discussion of a “regressive left”.


In a recent mini-review of David Stove’s Anything Goes: Origins of Scientific Irrationalism (originally published in 1982 as Popper and After), Diane Carlyle and Nick Walker make a casual reference to Stove’s “reactionary polemic”. By contrast, they refer to the philosophies of science that Stove attacks as “progressive notions of culture-based scientific knowledge”. To say the least, this appears tendentious.

To be fair, Carlyle and Walker end up saying some favourable things about Stove’s book. What is nonetheless alarming about their review is that it evidences just how easy it has become to write as if scientific realism were inherently “reactionary” and the more or less relativist views of scientific knowledge that predominate among social scientists and humanities scholars were “progressive”.

The words “reactionary” and “progressive” usually attach themselves to political and social movements, some kind of traditionalist or conservative backlash versus an attempt to advance political liberties or social equality. Perhaps Carlyle and Walker had another sense in mind, but the connotations of their words are pretty inescapable. Moreover, they would know as well as I do that there is now a prevalent equation within the social sciences and humanities of relativist conceptions of truth and reality with left-wing social critique, and of scientific realism with the political right. Carlyle and Walker wrote their piece against that background. But where does it leave those of us who retain at least a temperamental attachment to the left, however nebulous that concept is becoming, while remaining committed to scientific realism? To adapt a phrase from Christina Hoff Sommers, we are entitled to ask about who has been stealing socially liberal thought in general.

Is the life of reason and liberty (intellectual and otherwise) that some people currently enjoy in some countries no more than an historical anomaly, a short-lived bubble that will soon burst? It may well appear so. Observe the dreadful credulity of the general public in relation to mysticism, magic and pseudoscience, and the same public’s preponderant ignorance of genuine science. Factor in the lowbrow popularity of religious fundamentalism and the anti-scientific rantings of highbrow conservatives such as Bryan Appleyard. Yet the sharpest goad to despair is the appearance that what passes for the intellectual and artistic left has now repudiated the Enlightenment project of conjoined scientific and social progress.

Many theorists in the social sciences and humanities appear obsessed with dismantling the entirety of post-Enlightenment political, philosophical and scientific thought. This is imagined to be a progressive act, desirable to promote the various social, cultural and other causes that have become politically urgent in recent decades, particularly those associated with sex, race, and the aftermath of colonialism. The positions on these latter issues taken by university-based theorists give them a claim to belong to, if not actually constitute, the “academic left”, and I’ll refer to them with this shorthand expression.

There is, however, nothing inherently left-wing about wishing to sweep away our Enlightenment legacy. Nor is a commitment to scientific inquiry and hard philosophical analysis inconsistent with socially liberal views. Abandonment of the project of rational inquiry, with its cross-checking of knowledge in different fields, merely opens the door to the worst kind of politics that the historical left could imagine, for the alternative is that “truth” be determined by whoever, at particular times and places, possesses sufficient political or rhetorical power to decide what beliefs are orthodox. The rationality of our society is at stake, but so is the fate of the left itself, if it is so foolish as to abandon the standards of reason for something more like a brute contest for power.

It is difficult to know where to start in criticising the academic left’s contribution to our society’s anti-rationalist drift. The approaches I am gesturing towards are diverse among themselves, as well as being professed in the universities side by side with more traditional methods of analysing society and culture. There is considerable useful dialogue among all these approaches, and it can be difficult obtaining an accurate idea of specific influences within the general intellectual milieu.

However, amidst all the intellectual currents and cross-currents, it is possible to find something of a common element in the thesis or assumption (sometimes one, sometimes the other) that reality, or our knowledge of it, is “socially constructed”. There are many things this might mean, and I explain below why I do not quarrel with them all.

In the extreme, however, our conceptions of reality, truth and knowledge are relativised, leading to absurd doctrines, such as the repudiation of deductive logic or the denial of a mind-independent world. Symptomatic of the approach I am condemning is a subordination of the intellectual quest for knowledge and understanding to political and social advocacy. Some writers are prepared to misrepresent mathematical and scientific findings for the purposes of point scoring or intellectual play, or the simple pleasure of ego-strutting. All this is antithetical to Enlightenment values, but so much – it might be said – for the Enlightenment.


The notion that reality is socially constructed would be attractive and defensible if it were restricted to a thesis about the considerable historical contingency of any culture’s social practices and mores, and its systems of belief, understanding and evaluation. These are, indeed, shaped partly by the way they co-evolve and “fit” with each other, and by the culture’s underlying economic and other material circumstances.

The body of beliefs available to anyone will be constrained by the circumstances of her culture, including its attitude to free inquiry, the concepts it has already built up for understanding the world, and its available technologies for the gathering of data. Though Stove is surely correct to emphasise that the accumulation of empirical knowledge since the 17th century has been genuine, the directions taken by science have been influenced by pre-existing values and beliefs. Meanwhile, social practices, metaphysical and ethical (rather than empirical) beliefs, the methods by which society is organised and by which human beings understand their experience are none of them determined in any simple, direct or uniform way by human “nature” or biology, or by transcendental events.

So far, so good – but none of this is to suggest that all of these categories should or can be treated in exactly the same way. Take the domain of metaphysical questions. Philosophers working in metaphysics are concerned to understand such fundamentals as space, time, causation, the kinds of substances that ultimately exist, the nature of consciousness and the self. The answers cannot simply be “read off” our access to empirical data or our most fundamental scientific theories, or some body of transcendental knowledge. Nonetheless, I am content to assume that all these questions, however intractable we find them, have correct answers.

The case of ethical disagreement may be very different, and I discuss it in more detail below. It may be that widespread and deep ethical disagreement actually evidences the correctness of a particular metaphysical (and meta-ethical) theory – that there are no objectively existing properties of moral good and evil. Yet, to the extent that they depend upon empirical beliefs about the consequences of human conduct, practical moral judgements may often be reconcilable. Your attitude to the rights of homosexuals will differ from mine if yours is based on a belief that homosexual acts cause earthquakes.

Again, the social practices of historical societies may turn out to be constrained by our biology in a way that is not true of the ultimate answers to questions of metaphysics. All these are areas where human behaviour and belief may be shaped by material circumstances and the way they fit with each other, and relatively unconstrained by empirical knowledge. But, to repeat, they are not all the same.

Where this appears to lead us is that, for complicated reasons and in awkward ways, there is much about the practices and beliefs of different cultures that is contingent on history. In particular, the way institutions are built up around experience is more or less historically contingent, dependent largely upon economic and environmental circumstances and on earlier or co-evolving layers of political and social structures. Much of our activity as human beings in the realms of understanding, organising, valuing and responding to experience can reasonably be described as “socially constructed”, and it will often make perfectly good sense to refer to social practices, categories, concepts and beliefs as “social constructions”.

Yet this modest insight cries out for clear intellectual distinctions and detailed application to particular situations, with conscientious linkages to empirical data. It cannot provide a short-cut to moral perspicuity or sound policy formulation. Nor is it inconsistent with a belief in the actual existence of law-governed events in the empirical world, which can be the subject of objective scientific theory and accumulating knowledge.


As Antony Flew once expressed it, what is socially constructed is not reality itself but merely “reality”: the beliefs, meanings and values available within a culture.

Thus, none of what I’ve described so far amounts to “social constructionism” in a pure or philosophical sense, since this would require, in effect, that we never have any knowledge. It would require a thesis that all beliefs are so deeply permeated by socially specific ideas that they never transcend their social conditions of production to the extent of being about objective reality. To take this a step further, even the truth about physical nature would be relative to social institutions – relativism applies all the way down.

Two important points need to be made here. First, even without such a strong concept of socially constructed knowledge, social scientists and humanities scholars have considerable room to pursue research programs aimed at exploring the historically contingent nature of social institutions. In the next section, I argue that this applies quintessentially to socially accepted moral beliefs.

Second, however, there is a question as to why anyone would insist upon the thesis that the nature of reality is somehow relative to social beliefs all the way down, that there is no point at which we ever hit a bedrock of truth and falsity about anything. It is notorious that intellectuals who use such language sometimes retreat, when challenged, to a far more modest or equivocal kind of position.

Certainly, there is no need for anyone’s political or social aims to lead them to deny the mind-independent existence of physical nature, or to suggest that the truth about it is, in an ultimate sense, relative to social beliefs or subjective to particular observers. Nonetheless, many left-wing intellectuals freely express a view in which reality, not “reality”, is a mere social construction.


If social construction theory is to have any significant practical bite, then it has to assert that moral beliefs are part of what is socially constructed. I wish to explore this issue through some more fundamental considerations about ethics.

It is well-documented that there are dramatic contrasts between different societies’ practical beliefs about what is right and wrong, so much so that the philosopher J.L. Mackie said that these “make it difficult to treat those judgements as apprehensions of objective truths.” As Mackie develops the argument, it is not part of some general theory that “the truth is relative”, but involves a careful attempt to show that the diversity of moral beliefs is not analogous to the usual disagreements about the nature of the physical world.

Along with other arguments put by philosophers in Hume’s radical empiricist tradition, Mackie’s appeal to cultural diversity may persuade us that there are no objective moral truths. Indeed, it seems to me that there are only two positions here that are intellectually viable. The first is that Mackie is simply correct. This idea might seem to lead to cultural relativism about morality, but things are not always what they seem.

The second viable position is that there are objective moral truths, but they take the form of principles of an extremely broad nature, broad enough to help shape – rather than being shaped by – a diverse range of social practices in different environmental, economic and other circumstances.

If this is so, particular social practices and practical moral beliefs have some ultimate relationship to fundamental moral principles, but there can be enormous “slippage” between the two, depending on the range of circumstances confronting different human societies. Moreover, during times of rapid change such as industrialised societies have experienced in the last three centuries – and especially the last several decades – social practices and practical moral beliefs might tend to be frozen in place, even though they have become untenable. Conversely, there might be more wisdom, or at least rationality, than is apparent to most Westerners in the practices and moral beliefs of traditional societies. All societies, however, might have practical moral beliefs that are incorrect because of lack of empirical knowledge about the consequences of human conduct.

Taken with my earlier, more general, comments about various aspects of social practices and culturally-accepted “reality”, this approach gives socially liberal thinkers much of what they want. It tends to justify those who would test and criticise the practices and moral beliefs of Western nations while defending the rationality and sophistication of people from colonised cultures.


The academic left’s current hostility to science and the Enlightenment project may have its origins in a general feeling, brought on by the twentieth century’s racial and ideological atrocities, that the Enlightenment has failed. Many intellectuals have come to see science as complicit in terror, oppression and mass killing, rather than as an inspiration for social progress.

The left’s hostility has surely been intensified by a quite specific fear that the reductive study of human biology will cross a bridge from the empirical into the normative realm, where it may start to dictate the political and social agenda in ways that can aptly be described as reactionary. This, at least, is the inference I draw from left-wing intellectuals’ evident detestation of human sociobiology or evolutionary psychology.

The fear may be that dubious research in areas such as evolutionary psychology and/or cognitive neuroscience will be used to rationalise sexist, racist or other illiberal positions. More radically, it may be feared that genuine knowledge of a politically unpalatable or otherwise harmful kind will emerge from these areas. Are such fears justified? To dismiss them lightly would be irresponsible and naive. I can do no more than place them in perspective. The relationship between the social sciences and humanities, on the one hand, and the “hard” end of psychological research, on the other, is one of the most important issues to be tackled by intellectuals in all fields – the physical sciences, social sciences and humanities.

One important biological lesson we have learned is that human beings are not, in any reputable sense, divided into “races”. As an empirical fact of evolutionary history and genetic comparison, we are all so alike that superficial characteristics such as skin or hair colour signify nothing about our moral or intellectual worth, or about the character of our inner experience. Yet, what if it had turned out otherwise? It is understandable if people are frightened by our ability to research such issues. At the same time, the alternative is to suppress rational inquiry in some areas, leaving questions of orthodoxy to whoever wins the naked contest for power. This is neither rational nor safe.

What implications could scientific knowledge about ourselves have for moral conduct or social policy? No number of factual statements about human nature, by themselves, can ever entail statements that amount to moral knowledge, as Hume demonstrated. What is required is an ethical theory, persuasive on other grounds, that already links “is” and “ought”. This might be found, for example, in a definition of moral action in terms of human flourishing, though it is not clear why we should, as individuals, be concerned about something as abstract as that—why not merely the flourishing of ourselves or our particular loved ones?

One comfort is that, even if we had a plausible set of empirical and meta-ethical gadgets to connect what we know of human nature to high-level questions about social policy, we would discover significant slippage between levels. Nature does not contradict itself, and no findings from a field such as evolutionary psychology could be inconsistent with the observed facts of cultural diversity. If reductive explanations of human nature became available in more detail, these must turn out to be compatible with the existence of the vast spectrum of viable cultures that human beings have created so far. And there is no reason to believe that a lesser variety of cultures will be workable in the material circumstances of a high-technology future.

The dark side of evolutionary psychology includes, among other things, some scary-looking claims about the reproductive and sociopolitical behaviour of the respective sexes. True, no one seriously asserts that sexual conduct in human societies and the respective roles of men and women within families and extra-familial hierarchies are specified by our genes in a direct or detailed fashion. What, however, are we to make of the controversial analyses of male and female reproductive “strategies” that have been popularised by several writers in the 1990s? Perhaps the best-known exposition is that of Matt Ridley in The Red Queen: Sex and the Evolution of Human Nature (1993). Such accounts offer evidence and argument that men are genetically hardwired to be highly polygamous or promiscuous, while women are similarly programmed to be imperfectly monogamous, as well as sexually deceitful.

In responding to this, first, I am in favour of scrutinising the evidence for such claims very carefully, since they can so readily be adapted to support worn-out stereotypes about the roles of the sexes. That, however, is a reason to show scientific and philosophical rigour, not to accept strong social constructionism about science. Secondly, even if findings similar to those synthesised by Ridley turned out to be correct, the social consequences are by no means apparent. Mere biological facts cannot tell us in some absolute way what are the correct sexual mores for a human society.

To take this a step further, theories about reproductive strategies suggest that there are in-built conflicts between the interests of men and women, and of higher and lower status men, which will inevitably need to be moderated by social compromise, not necessarily in the same way by different cultures. If all this were accepted for the sake of argument, it might destroy a precious notion about ourselves: that there is a simple way for relations between the sexes to be harmonious. On the other hand, it would seem to support rather than refute what might be considered a “progressive” notion: that no one society, certainly not our own, has the absolutely final answer to questions about sexual morality.

Although evolutionary psychology and cognitive neuroscience are potential minefields, it is irrational to pretend that they are incapable of discovering objective knowledge. Fortunately, such knowledge will surely include insight into the slippage between our genetic similarity and the diversity of forms taken by viable cultures. The commonality of human nature will be at a level that is consistent with the (substantial) historical contingency of social practices and of many areas of understanding and evaluative belief. The effect on social policy is likely to be limited, though we may become more charitable about what moral requirements are reasonable for the kinds of creatures that we are.

I should add that evolutionary psychology and cognitive neuroscience are not about to put the humanities, in particular, out of business. There are good reasons why the natural sciences cannot provide a substitute for humanistic explanation, even if we obtain a far deeper understanding of our own genetic and neurophysiological make-up. This is partly because reductive science is ill-equipped to deal with the particularity of complex events, partly because causal explanation may not be all that we want, anyway, when we try to interpret and clarify human experience.


Either there are no objective moral truths or they are of an extremely general kind. Should we, therefore, become cultural relativists?

Over a quarter of a century ago, Bernard Williams made the sharp comment that cultural relativism is “possibly the most absurd view to have been advanced even in moral philosophy”. To get this clear, Williams was criticising a cluster of beliefs that has a great attraction for left-wing academics and many others who preach inter-cultural tolerance: first, that what is “right” means what is right for a particular culture; second, that what is right for a particular culture refers to what is functionally valuable for it; and third, that it is “therefore” wrong for one culture to interfere with the organisation or values of another.

As Williams pointed out, these propositions are internally inconsistent. Not only does the third not follow from the others; it cannot be asserted while the other two are maintained. After all, it may be functionally valuable to culture A (and hence “right” within that culture) for it to develop institutions for imposing its will on culture B. These may include armadas and armies, colonising expeditions, institutionalised intolerance, and aggressively proselytising religions. In fact, nothing positive in the way of moral beliefs, political programs or social policy can ever be derived merely from a theory of cultural relativism.

That does not mean that there are no implications at all from the insight that social practices and beliefs are, to a large degree, contingent on history and circumstance. Depending upon how we elaborate this insight, we may have good reason to suspect that another culture’s odd-looking ways of doing things are more justifiable against universal principles of moral value than is readily apparent. In that case, we may also take the view that the details of how our own society, or an element of it, goes about things are open to challenge as to how far they are (or remain?) justifiable against such universal principles.

If, on the other hand, we simply reject the existence of any objective moral truths – which I have stated to be a philosophically viable position – we will have a more difficult time explaining why we are active in pursuing social change. Certainly, we will not be able to appeal to objectively applicable principles to justify our activity. All the same, we may be able to make positive commitments to ideas such as freedom, equality or benevolence that we find less arbitrary and more psychologically satisfying than mere acquiescence in “the way they do things around here”. In no case, however, can we intellectually justify a course of political and social activism without more general principles or commitments to supplement the bare insight that, in various complicated ways, social beliefs and practices are largely contingent.


An example of an attempt to short-circuit the kind of hard thinking about moral foundations required to deal with contentious issues is Martin F. Katz’s well-known article, “After the Deconstruction: Law in the Age of Post-Structuralism”. Katz is a jurisprudential theorist who is committed to a quite extreme form of relativism about empirical knowledge. In particular, his article explicitly assigns the findings of physical science the same status as the critical interpretations of literary works.

Towards the end of “After the Deconstruction”, Katz uses the abortion debate as an example of how what he calls “deconstructionism” or the “deconstructionist analysis” can clarify and arbitrate social conflict. He begins by stating the debate much as it might be seen by its antagonists:

One side of the debate holds that abortion is wrong because it involves the murder of an unborn baby. The other side of the debate sees abortion as an issue of self-determination; the woman’s right to choose what she does to her body. How do we measure which of these “rights” should take priority?

In order to avoid any sense of evasion, I’ll state clearly that the second of these positions, the “pro-choice” position, is closer to my own. However, either position has more going for it in terms of rationality than what Katz actually advocates.

This, however, is not how Katz proposes to solve the problem of abortion. He begins by stating that “deconstructionism” recommends that we “resist the temptation to weigh the legitimacy of . . . these competing claims.” Instead, we should consider the different “subjugations” supposedly instigated by the pro-life and pro-choice positions. The pro-life position is condemned because it denies women the choice of what role they wish to take in society, while the pro-choice position is apparently praised (though even this is not entirely clear) for shifting the decision about whether and when to have children directly to women.

The trouble with this is that it prematurely forecloses on the metaphysical and ethical positions at stake, leaving everything to be solved in terms of power relations. However, if we believe that a foetus (say at a particular age) is a person in some sense that entails moral regard, or a being that possesses a human soul, then there are moral consequences. Such beliefs, together with some plausible assumptions about our moral principles or commitments, entail that we should accept that aborting the foetus is an immoral act. The fact that banning the abortion may reduce the political power of the woman concerned, or of women generally, over against that of men will seem to have little moral bite, unless we adopt a very deep principle of group political equality. That would require ethical argument of an intensity which Katz never attempts.

If we take it that the foetus is not a person in the relevant sense, we may be far more ready to solve the problem (and to advocate an assignment of “rights”) on the basis of utilitarian, or even libertarian, principles. By contrast, the style of “deconstructionist” thought advocated by Katz threatens to push rational analysis aside altogether, relying on untheorised hunches or feelings about how we wish power to be distributed in our society. This approach can justifiably be condemned as irrational. At the same time, the statements that Katz makes about the political consequences for men or women of banning or legalising abortion are so trite that it is difficult to imagine how anyone not already beguiled by an ideology could think that merely stating them could solve the problem.


In the example of Katz’s article, as in the general argument I have put, the insight that much in our own society’s practices and moral beliefs is “socially constructed” can do only a modest amount of intellectual work. We may have good reason to question the way they do things around here, to subject it to deeper analysis. We may also have good reason to believe that the “odd” ways they do things in other cultures make more sense than is immediately apparent to the culture-bound Western mind. All very well. None of this, however, can undermine the results of systematic empirical inquiry. Nor can it save us from the effort of grappling with inescapable metaphysical and ethical questions, just as we had to do before the deconstruction.

[My Amazon author page]

Solving the Attendance Problem

While philosophy is about inquiry and students should be encouraged to ask questions, there used to be one question I hoped students would not ask. That question was “do I need the book?” I did realize that some students asked this question out of a legitimate concern about their often limited finances. In other cases, it arose from a soul-deep hope to avoid the unbearable pain of reading philosophy.

My answer was always an honest “yes.” I must confess that I have heard the evil whispers of the Book Devil trying to tempt me to line my shelves with desk copies or, even worse, get free books to sell to the book buyers. But I have always resisted this temptation. My will, I must say, was fortified by memories of buying expensive books that were never actually used by the professors in the classes. Despite the fact that the books for my courses were legitimately required and I diligently sought the best books for the lowest costs, the students still lamented my cruel practice of actually requiring books.

Moved by their terrible suffering, I quested for a solution and found it: technology. Since most of the great philosophers are not only dead but really, really dead, their works are typically in the public domain. This allowed me to assemble free texts for all my classes except Critical Inquiry. These were first distributed via 3.5 inch floppies (kids, ask your parents about these), then via the internet. While I could not include the latest (and allegedly greatest) of contemporary philosophy, the digital books are clearly as good as most of the expensive offerings. The students are, I am pleased to say, happy that the books they will not read will not cost them a penny. Yes, sometimes students now ask “do I have to read the book?” I say “yes.”

Since I make a point of telling the students on day one that the book is a free PDF file (except for the Critical Inquiry text), I rarely hear “do I need to buy the book?” these days. Now students ask “do I have to come to class?” I have to take some of the blame for this—my classes are designed so that all the coursework can be completed or turned in online via Black Board. Technology is thus shown, once again, to be a two-edged sword: it solved the “do I have to buy the book?” problem, but helped create the “do I have to come to class?” problem.

When I was first asked this, I was a bit bothered. After all, a reasonable interpretation of the question is “I think I have nothing to learn. I believe you have nothing to teach me. But I’d rather not fail.” Since I have a reasonably good understanding of what people are like, I am confident that this interpretation is often correct. Honesty even compels me to admit that the student could be right: perhaps the student does have nothing to learn from me. After all, various arguments have been advanced over the centuries that philosophy is useless and presumably not worth learning. Things like logic, critical thinking and ethics could be worthless—after all, some people seem to do just fine without them. Some even manage to hold high positions. Or at least want to. However, I am reasonably confident that the majority of students do have something to learn that I can teach them.

After overcoming my initial annoyance, I gave the matter considerable thought. As with the “do I have to buy the book?” question, there could be a good reason for asking. This reason could be that the student needs the time that would otherwise be spent in my class to do things for other classes. Or time to grind for engrams and materials in Destiny. The student might even need the time to work in order to earn money to pay for school.

This was not the first time that I had thought about why students skipped my class. Since April 2014, I have been collecting survey data from students. While, as of this writing, I have only 233 responses, 28.8% of students surveyed claimed that work was the primary reason they missed class. 15% claimed that the fact that they could turn in work via Black Board was the reason they skipped class. This reason is currently in second place. 6% claimed they needed to spend time on other classes.

There are some obvious concerns with my survey. The first is that the sample is relatively small at 233 students. The second is that although the survey is completely anonymous, the respondents might be inclined to select the answer they regard as the most laudable reason to miss class. That said, these results do make intuitive sense. One reason is that the majority of students at Florida A&M University are from low-income families and hence often need to work to pay for school. Another reason is that I routinely overhear students talking about their jobs and I sometimes even see students wearing their work uniforms in class.
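The percentages above are simple shares of the 233 responses. As a minimal sketch of how such a tally works (the response counts and category labels here are hypothetical reconstructions chosen to match the reported percentages, not my actual survey data):

```python
from collections import Counter

# Hypothetical responses; counts are chosen to reproduce the
# reported shares out of 233 total (28.8% work, 15% Black Board, 6% other classes).
responses = (
    ["work"] * 67
    + ["blackboard"] * 35
    + ["other classes"] * 14
    + ["other"] * 117
)

def tally(responses):
    """Return each reason's share of the responses as a rounded percentage."""
    counts = Counter(responses)
    total = len(responses)
    return {reason: round(100 * n / total, 1) for reason, n in counts.items()}

shares = tally(responses)
# e.g. shares["work"] comes out to 28.8
```

The rounding to one decimal place matches how the figures are reported in the text.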

While it might be suspected that my main concern about attendance is a matter of ego, it is actually a matter of concern for my students. In addition to being curious about why students were skipping my class, I was also interested in why students failed my courses. Fortunately, I had considerable objective data in the form of attendance records, grades, and coursework.

I found a clear correlation between lack of attendance and failing grades. None of the students who failed had perfect attendance and only 27% had better than 50% attendance. This was hardly surprising: students who do not attend class miss out on the lectures, class discussion and the opportunity to ask questions. To use the obvious analogy, these students are like athletes skipping practice and the coursework is analogous to meets or games.

I have been testing a solution to this problem: I am creating YouTube videos of one of my classes and putting the links into Black Board. This way students can view the videos at their convenience and skip or rewind as they desire. As might be suspected given the cast and production values, the view counts are rather low. However, some students have already expressed appreciation for the availability of the videos. If the videos reduce the number of students who fail by even a few each semester, then the effort will be worthwhile. It would also be worthwhile if I went viral and was able to ride that sweet wave of internet fame to some boosted book sales. I do not, however, see that happening. The fame, that is.

I also found that 67.7% of the students who failed did so because of failing scores on work. While this might elicit a response of “duh”, 51% of those who failed did not complete the exams, 45% did not complete the quizzes, and 42% did not complete the paper. As such, while failing grades on the work was a major factor, simply not doing the work was also a significant cause. Interestingly, none of the students who failed completed all the work—part of the reason for the failure was not completing the work. While they might have failed the work even if they had completed it, failure was assured by not making the attempt.

My initial attempt at solving the problem involved having all coursework either on Black Board or capable of being turned in via Black Board. My obvious concern with this solution was the possibility that students would cheat. While there are some awkward and expensive solutions (such as video monitoring) I decided to rely on something I had learned about the homework assigned in my courses: despite having every opportunity to cheat, student performance on out-of-class work was consistent with their performance on monitored in-class work. It was simply a matter of designing questions and tests to make cheating unrewarding. The solution was fairly easy—questions aimed mainly at comprehension, a tight time limit on exams, and massive question banks to generate random exams. This approach seems to have worked: student grades remained very close to those in pre-Black Board days. Students can, of course, try to cheat—but either they are not cheating or they are cheating in ways that have had no impact on their grades. On the plus side, there was an increase in the completion rate of the coursework. However, the increase was not as significant as I had hoped.
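The question-bank approach described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical bank of question IDs (the function name and parameters are mine, not Black Board's): drawing each student's exam at random from a bank much larger than the exam means two students rarely see the same questions, which, combined with a tight time limit, makes answer-sharing unrewarding.

```python
import random

def build_exam(question_bank, num_questions, seed=None):
    """Draw a random exam (without repeats) from a large question bank.

    A per-student seed makes each student's draw reproducible for grading
    while still varying the questions between students.
    """
    rng = random.Random(seed)
    return rng.sample(question_bank, num_questions)

# Hypothetical bank of 200 comprehension questions keyed by ID.
bank = [f"Q{i}" for i in range(200)]

exam_a = build_exam(bank, 10, seed=1)  # one student's exam
exam_b = build_exam(bank, 10, seed=2)  # another student's exam
```

With a 200-question bank and 10-question exams, the overlap between any two students' exams is small, so copying a classmate's answers buys little.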

In light of the work left uncompleted, I decided to set very generous deadlines. Students get a month to complete the quizzes for a section. For exams 1-3 (which cover sections 1-3), students get one month after we finish a section to complete the exam. Exam 4 is due at the end of the last day of classes and the final is due at the end of the normal final exam period. The paper deadlines are unchanged from the pre-Black Board days, although now the students can turn in papers from anywhere with internet access and can do so round the clock.

The main impact of this change has been another increase in the completion rate of work, thus decreasing the failure rate in my classes. As should be suspected, there are still students who do not complete all the work and fail much of the work they do complete. While I can certainly do more to provide students with the opportunity to pass, they still have responsibilities. One of mine is, of course, to record their failure.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Ontological Zombies

As a gamer and horror fan I have an undecaying fondness for zombies. Some years back, I was intrigued to learn about philosophical zombies—I had a momentary hope that my fellow philosophers were doing something…well…interesting. But, as so often has been the case, professional philosophers managed to suck the life out of even the already lifeless. Unlike proper flesh devouring products of necromancy or mad science, philosophical zombies lack all coolness.

To bore the reader a bit, philosophical zombies are beings who look and act just like normal humans, but lack consciousness. They are no more inclined to seek the brains of humans than standard humans, although discussions of them can numb the brain. Rather than causing the horror proper to zombies (or the joy of easy XP), philosophical zombies merely bring about a feeling of vague disappointment. This is the same sort of disappointment that you might recall from childhood trick or treating when someone gave you pennies or an apple rather than real candy.

Rather than serving as creepy cannon fodder for vile necromancers or metaphors for vacuous and excessive American consumerism, philosophical zombies serve as victims in philosophical discussions about the mind and consciousness.

The dullness of current philosophical zombies does raise an important question—is it possible to have a philosophical discussion about proper zombies? There is also a second and equally important question—is it possible to have an interesting philosophical discussion about zombies? As I will show, the answers are “yes” and “obviously not.”

Since there is, at least in this world, no Bureau of Zombie Standards and Certification, there are many varieties of zombies. In my games and fiction, I generally define zombies in terms of beings that are biologically dead yet animated (or re-animated, to be more accurate). Traditionally, zombies are “mindless” or at least possess extremely basic awareness (enough to move about and seek victims).

In fiction, many beings called “zombies” do not have these qualities. The zombies in 28 Days Later are “mindless”, but are still alive. As such, they are not really zombies at all—just infected people. The zombies in Return of the Living Dead are dead and re-animated, but retain their human intelligence. Zombie lords and juju zombies in D&D and Pathfinder are dead and re-animated, but are intelligent. In the real world, there are also what some call zombies—these are organisms taken over and controlled by another organism, such as an ant controlled by a rather nasty fungus. To keep the discussion focused and narrow, I will stick with what I consider proper zombies: biologically dead, yet animated. While I generally consider zombies to be unintelligent, I do not consider that a definitive trait. For folks concerned about how zombies differ from other animate dead, such as vampires and ghouls, the main difference is that stock zombies lack the special powers of more luxurious undead—they have the same basic capabilities as the living creature (mostly moving around, grabbing and biting).

One key issue regarding zombies is whether or not they are possible. There are, of course, various ways to “cheat” in creating zombies—for example, a mechanized skeleton could be embedded in dead flesh to move the flesh about. This would make a rather impressive horror weapon—so look for it in a war coming soon. Another option is to have a corpse driven about by another organism—wearing the body as a “meat suit.” However, these would not be proper zombies since they are not self-propelling—they are just being moved about by something else.

In terms of “scientific” zombies, the usual approaches include strange chemicals, viruses, funguses or other such means of animation. Since it is well-established that electrical shocks can cause dead organisms to move, getting a proper zombie would seem to be an engineering challenge—although making one work properly could require substantial “cheating” (for example, having computerized control nodes in the body that coordinate the manipulation of the dead flesh).

A much more traditional means of animating corpses is via supernatural means. In games like Pathfinder, D&D and Call of Cthulhu, zombies are animated by spells (the classic being animate dead) or by an evil spirit occupying the flesh. In the D&D tradition, zombies (and all undead) are powered by negative energy (while living creatures are powered by positive energy). It is this energy that enables the dead flesh to move about (and violate the usual laws of biology).

While the idea of negative energy is mostly a matter of fantasy games, the notion of unintelligent animating forces is not unprecedented in the history of science and philosophy. For example, Aristotle seems to have considered that the soul (or perhaps a “part” of it) served to animate the body. Past thinkers also considered forces that would animate non-living bodies. As such, it is easy enough to imagine a similar sort of force that could animate a dead body (rather than returning it to life).

The magic “explanation” is the easiest approach, in that it is not really an explanation. It seems safe to hold that magic zombies are not possible in the actual world—though all the zombie stories and movies show it is rather easy to imagine possible worlds inhabited by them.

The idea of a truly dead body moving around in the real world the way fictional zombies do in their fictional worlds does seem somewhat hard to accept. After all, it seems essential to biological creatures that they be alive (to some degree) in order for them to move about under their own power. What would be needed is some sort of force or energy that could move truly dead tissue. While this is clearly conceivable (in the sense that it is easy to imagine), it certainly does not seem possible—at least in this world. Dualists might, of course, be tempted to consider that the immaterial mind could drive the dead shell—after all, this would only be marginally more mysterious than the ghost driving around a living machine. Physicalists, of course, would almost certainly balk at proper zombies—at least until the zombie apocalypse. Then they would be running.


Total Validation Experience

There are many self-help books on the market, but they all suffer from one fatal flaw. That flaw is the assumption that the solution to your problems lies in changing yourself. This is a clearly misguided approach for many reasons.

The first is the most obvious. As the principle of identity states, A=A. Or, put in wordy words, “each thing is the same with itself and different from another.” As such, changing yourself is impossible: to change yourself, you would cease to be you. The new person might be better. And, let’s face it, probably would be. But, it would not be you. As such, changing yourself would be ontological suicide and you do not want any part of that.

The second is less obvious, but is totally historical. Parmenides of Elea, a very dead ancient Greek philosopher, showed that change is impossible. I know that “Parmenides” sounds like a cheese, perhaps one that would be good on spaghetti. But, trust me, he was a philosopher and would probably make a poor pasta topping.  Best of all, he laid it out in poetic form, the most truthful of truth conveying word wording:

How could what is perish? How could it have come to be? For if it came into being, it is not; nor is it if ever it is going to be. Thus coming into being is extinguished, and destruction unknown.

Nor was [it] once, nor will [it] be, since [it] is, now, all together, / One, continuous; for what coming-to-be of it will you seek? / In what way, whence, did [it] grow? Neither from what-is-not shall I allow / You to say or think; for it is not to be said or thought / That [it] is not. And what need could have impelled it to grow / Later or sooner, if it began from nothing? Thus [it] must either be completely or not at all.

[What exists] is now, all at once, one and continuous… Nor is it divisible, since it is all alike; nor is there any more or less of it in one place which might prevent it from holding together, but all is full of what is.

And it is all one to me / Where I am to begin; for I shall return there again.

That, I think we can all agree, is completely obvious and utterly decisive. Since you cannot change, you cannot self-help yourself by changing. That is just good logic. I would say more, but I do not get paid by the word to write this stuff. Hell, I do not get paid at all.

But, obviously enough, you want to help yourself to a better life. Since you cannot change and it should be assumed with 100% confidence that you are not the problem, an alternative explanation for your woes is needed. Fortunately, the problem is obvious: other people. The solution is equally obvious: get new people. Confucius said “Refuse the friendship of all who are not like you.” This was close to the solution, but if you are annoying or a jerk, being friends with annoying jerks is not going to help you. A better solution is to tweak Confucius just a bit: “Refuse the friendship of all who do not like you.” This is a good start, but more is needed. After all, it is obvious that you should just be around people who like you. But that will not be totally validating.

The goal is, of course, to achieve a Total Validation Experience (TVE). A TVE is an experience that fully affirms and validates whatever you feel needs to be validated at the time. It might be your opinion on Mexicans or your belief that your beauty rivals that of Adonis and Helen. Or it might be that your character build in Warcraft is fully and truly optimized.

By following this simple dictate “Refuse the friendship of all who do not totally validate you”, you will achieve the goal that you will never achieve with any self-help book: a vast ego, a completely unshakeable belief that you are right about everything, and all that is good in life. You will never be challenged and never feel doubt. It will truly be the best of all possible worlds. So, get to work on surrounding yourself with Validators.


Performance Based Funding & Disadvantaged Students

As I have discussed in previous essays, Florida state universities now operate under a performance based model of funding and Florida A&M University (FAMU), my university, has performed poorly in regards to the state standards. One area of poor performance is the six-year graduation rate. Another is student loan debt, both in terms of the debt accrued and the default rate: it has been claimed that FAMU students default on their loans at three times the state average. One proposed explanation for this poor performance is that FAMU accepts students who are ill-prepared for a four year university. It has also been suggested that such students would be better served by community colleges.

I will not dispute the claim that FAMU admits some students who are ill-prepared for a four year university. This is because the claim is true. One reason it is true is because FAMU has had an historical mission of providing an opportunity for the disadvantaged. One part of this mission is shown by the fact FAMU is an HBCU (Historically Black College and University). Before desegregation, HBCUs provided almost the only higher education opportunities for African-Americans. After the end of legal segregation, HBCUs still served a vital role in providing such opportunities. As predominantly white colleges (PWCs, also known as Predominantly White Institutions or PWIs) became more integrated, people began to argue that this old mission of HBCUs was no longer relevant. After all, if black students can attend any school they wish and racism is no longer a factor, then one might say “mission accomplished.” Unfortunately, as I discussed in my essay on performance based funding and race, race is still a significant factor in regards to economic and academic success. As such, while the dismantling of some barriers to education is to be lauded, many more still remain. Among these are numerous economic barriers.

While it could be argued that FAMU no longer has a mission to offset racism in America by offering educational opportunities to African Americans, FAMU has also had a longstanding mission of serving the economically disadvantaged. Students who come from a background of economic and academic disadvantage (these are almost always tightly linked) face many challenges to graduating and, not surprisingly, are more likely to have student debt. It is well worth considering why disadvantaged students generally perform worse than other students.

One rather obvious factor is that students from poor schools (which tend to be located in economically disadvantaged areas) will face the challenge imposed by being poorly prepared for college. While individuals can overcome this through natural talent and special effort, this poor preparation is analogous to a weight chained to a runner’s leg—she will have to run so much harder to go as fast as others who are not dragging such a burden.

Another especially disturbing factor is that poverty has been found to negatively impact brain development as well as academic performance. Poverty is quite literally damaging American children and thus doing harm to the future of America. Unfortunately, for many politicians the concern regarding children seems to end at birth, so this problem is unlikely to be seriously addressed in the existing political climate.

A third factor is that disadvantaged students, being disadvantaged, generally need to borrow more money than students from wealthier backgrounds. This entails more student debt on the part of the disadvantaged. It also creates a rather vicious scenario: a student who needs to take out loans is more likely to end up with financial challenges in school. A student who is challenged financially is more likely to drop out than a student who is not. Students who drop out are more likely to default on student loans. This provides a rather clear explanation of why disadvantaged students have low completion rates, high debts and high default rates.

As might be expected, seventy percent of African American students say that student debt is their main reason for dropping out. In contrast, less than fifty percent of white students make this claim. This is quite consistent with my own study of student performance: over the course of my study, the primary reason for missing class was work and the main reason students gave for not graduating was financial.

In terms of why students are taking out more and larger loans than at any time in United States history, there are some easy and obvious answers. One is the fact that incomes for all but the wealthiest have, at best, stagnated for nearly thirty years—as such, people have less money to spend on college and thus need to take out loans. Students also need to work more in college, which can make attending class and completing work challenging.

A second is the fact that state funding for education has dropped substantially as a result of both ideology and the Great Recession. Even after the broader economy rebounded, education funding was not restored and some states continued to cut funding. With less state funding, universities raised tuition and this, naturally enough, has led to an increased need for students to work more (which impacts graduation rates) and take out loans—which leads to debt. It is a cruel irony that the very people who have cut education funding judge schools by how well they handle the problems such cuts have created or exacerbated. To use an analogy, this is like taking a runner’s shoes, striking her legs with a baton and then threatening to do more damage unless she runs even faster than before. This is madness.

Given the factors discussed above, it should hardly be surprising that a school, such as FAMU, that intentionally enrolls disadvantaged students will perform worse than schools that do not have such a mission. Since FAMU’s funding is linked to its performance, it is rather important to consider solutions to this situation.

The state legislature could address this problem in various ways. One approach would be to address the economic and academic inequality that creates disadvantaged students. This, however, seems extremely unlikely in the present political climate.

A second approach would be to restore the education funding that was cut (or even increase it beyond that). However, the current ideological commitment is to cutting education funding while, at the same time, expressing shocked dismay at greater student debt and punishing schools for not solving this problem by taking away even more money. As such, it seems reasonable, though rather unfortunate, to dismiss the state legislature as a source of solutions and instead regard them as a major part of the problem.

For schools such as FAMU, one option is to change the mission of the school to one that matches the views of those providing oversight of the schools. This revised mission would not include providing opportunities to the disadvantaged. Rather, it would involve improving the graduation and debt numbers by ceasing to admit disadvantaged students. On the plus side, this would enable FAMU to improve its performance relative to the goals imposed by the legislature that helped create the student debt crisis and helped lower graduation rates. However, the performance based funding system imposed by the state must have losers, so even if FAMU improves, it might not improve enough to push some other schools to the bottom. Even if it does improve enough, it would merely shift the state’s punishment to some other school—which is certainly morally problematic (rather like the old joke about not needing to outrun the bear, just one other person).

On the minus side, abandoning this historic mission of providing opportunity to the disadvantaged would mean abandoning people to the mire of poverty and the desert that is a lack of opportunity. As a professor who teaches ethics, this strikes me as morally reprehensible—especially in a country whose politicians cry endlessly about opportunity, economic mobility and the American Dream.

As has been well-established by history, a college degree is a way to achieve greater economic success and it has been a ladder out of poverty for many previous generations of Americans. To kick away this ladder would be to say that the American Dream is only for those lucky enough to already be well off and the rest can simply stay at the bottom. This could be done, but if it is done, then we must no longer speak of this being a land of opportunity for everyone.

It might be countered that, as was suggested, the disadvantaged students could attend a two-year college. While this idea has become something of a talking point, the evidence shows that it is actually not a low cost, low debt option for students. Because of higher education costs and reduced state support, disadvantaged students will still need to take out loans to attend such schools and will face the same general challenges they would face at a four-year institution.

It could also be countered that enrolling disadvantaged students does not actually help them. After all, if they do not graduate and end up accumulating considerable debt, then it could be argued they would have been better off never making the attempt. They could, instead, go straight to work right out of high school (or complete some technical training). The money that would have been spent on them could be spent on students more likely to succeed (because they already enjoy advantages).

While I am committed to the value of education, this is a point well worth considering. If an objective and fair assessment of the data shows that disadvantaged students are worse off when they attempt a four-year degree, then it would make no sense to admit such students. However, if the data shows that providing such students with this opportunity does provide positive benefits, then it would seem a good idea to continue to offer people a chance to escape to a better future from a disadvantaged past. This is, of course, a matter of value—how much is it worth to society to provide such opportunities and at what point should we, as a people, say that the cost is too high to give our fellow Americans an opportunity? Or, put another way, how much are we willing to spend to be able to speak about the American Dream without speaking lies?


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Running Down the Hill of Life

Turkey Trot 11-27-2008

Each of us has their own hill (or mound or even mountain) that is life. I can see the hills of other people. Some are still populated, some still bear the warm footprints of a recently departed fellow runner (goodbye Eric), and so very many of the others are cold with long abandonment. While I can see these other hills, I can only run my own and no one else can run mine. That is how it is, poetry and movies notwithstanding. In truth, we all run alone.

I am in fact and metaphor a distance runner. Running the marathon, and even greater distances, gave me a sneak preview of old age. I finished my first marathon at the age of 22, at the peak of my strength, crossing the line in 2:45. Having consulted with old feet at marathons, I knew that the miles would beat me like a piñata—only instead of candy, I would be full of pain. I hobbled along slowly for the next few days—barely able to run. But, being young, I was soon back up to speed, forgetting that brief taste of the cruelty of time. But time never forgets us.

We runners have an obsession with numbers. We record our race times, our training distances and many other things. Everyone is aware that the march of time eventually becomes a slide downhill, but runners are forced to face the objective quantification of their decline. Though I started running in high school, I did not become a runner until after my first year as a college athlete in 1985, and I only started recording my run data back in 1987. I, with complete faith in my young brain, was sure I would remember my times forever.

My first victory in a 5K was in 1985—I ran an 18:20 for the win. My time improved considerably: I broke 18, then 17 and (if my memory is not a false one) even 16. Then, as must happen, I reached the peak of my running hill and the decline began. I struggled to stay under 17, fought to stay under 18, battled to stay below 19, and then warred to remain below 20. The realization of the damage done by time sank home when my 5K race pace was the same as the pace for my first marathon. Once, I sailed through 26.2 miles at about a 6:20 per mile pace. Now I have to work hard to do that for a 5K. Another marker was when my 5-mile race time finally became slower than my 10K race time (33 minutes). Damn the numbers.
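For fellow number-obsessed runners, the pace arithmetic behind those comparisons is simple enough to sketch. The function name here is just illustrative, not from any running library:

```python
# A minimal sketch: convert a finish time over a distance into a
# per-mile pace, e.g. a 2:45 marathon over 26.2 miles.
def pace_per_mile(hours, minutes, miles):
    """Return (minutes, seconds) of pace per mile for the given run."""
    total_minutes = hours * 60 + minutes
    pace = total_minutes / miles          # decimal minutes per mile
    whole = int(pace)
    secs = round((pace - whole) * 60)     # leftover fraction as seconds
    return whole, secs

m, s = pace_per_mile(2, 45, 26.2)
print(f"{m}:{s:02d} per mile")  # prints "6:18 per mile"
```

A 2:45 marathon thus works out to roughly a 6:18 pace—“about a 6:20 per mile pace,” as the memory has it.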

Each summer, I return to my home town and run the routes of my youth. Back in the day, I would run 16 miles at a 7 minute per mile pace. Now I shuffle along two and a half minutes per mile slower. But, dragging all those years will slow a man down. When I run those old routes, I speed up when I hit the coolness of the pine forest—the years momentarily drop away and I feel like a young man again. But, like the deerflies that haunt my run, they soon catch up. Like the deerflies, the years bite. Unlike the deerflies, I cannot just swat them down. Rather, they are swatting me down and, like many a deerfly, I will eventually be crushed and broken by a great hand. In this case, not the hand of some guy from Maine, but the hand of time. Someday, as has happened to friends, I will go out for a run and never come back. But until that day, the run goes on. And on.

Muslims, Bigotry & History

John F. Kennedy, former President of ...

In September of 2015, Republican presidential candidate Ben Carson took some heat for his remarks regarding Muslims. His fellow candidate, Donald Trump, has also faced some criticism for his persistence in feeding the suspicions that President Obama is a secret Muslim. Some of the fine folks at Fox and other conservative pundits have an established history of what some critics regard as anti-Muslim bigotry.

As might be suspected, those accused of such bigotry respond with claims that they are not bigots—they are merely telling the truth about Islam. Ben Carson echoed a common anti-Muslim claim when he asserted that a Muslim should not be President because “Muslims feel that their religion is very much a part of your public life and what you do as a public official, and that’s inconsistent with our principles and our Constitution.” There are also the stock claims that nearly all Muslims wish to impose Sharia law on America, that Islam (unlike any other faith) cannot become a part of American society, and that taqiyya allows Muslims a license to lie to achieve their (nefarious) goals. The assertion about taqiyya is especially useful—any attempt by Muslims to refute these accusations can be dismissed as falling under taqiyya.

It is not always clear if the bigotry expressed against Muslims is “honest” bigotry (that is, the person really believes what he says) or if it is an attempt at political manipulation. While “honest” bigotry is bad enough, feeding the fires of hatred for political gain is perhaps even worse. This sort of bigotry in politics is, obviously, nothing new. In fact, there is a historical cycle of bigotry.

Though I am not a Mormon, in 2011 I wrote a defense of Mitt Romney and Mormonism against accusations that Mormonism is a cult. I have also written in defense of the claim that Mormonism is a form of Christianity. While the religious bigotry against Romney was not very broad in scope, it was present and is similar to the bigotry in play against Muslims today.

Perhaps the best known previous example of bigotry against a religion in America is the anti-Catholicism that was rampant before Kennedy became President. Interestingly, the accusations against American Catholics are mirrored in some of the current accusations against American Muslims—that a Catholic politician would be controlled by an outside religious power, that a Catholic politician would impose his religious rules on America and so on. As is now evident, these accusations proved baseless and now Catholics are accepted as “real” Americans, fit for holding public office. In fact, a significant percentage of Congress is Catholic. Given that the accusations against Catholicism turned out to be untrue, it seems reasonable to consider that the same accusations against Islam are also untrue.

The bigotry against Muslims has also been compared to the mass internment of Japanese Americans during WWII. In an exchange with a questioner who asked “when can we get rid of them?” (“them” being Muslims), Trump responded that he would be “looking at that and plenty of other things.” In the case of Japanese Americans, the fear was that they would serve as spies and saboteurs for Japan, despite being American citizens. The reality was, of course, that Japanese Americans served America just as loyally as German Americans and Italian Americans. The bigotry against Muslims seems to be rather similar to the same bigotry that led to “getting rid of” Japanese Americans. I would hope that what we learned as a country from the injustice against the Japanese Americans would make any decent American ashamed of talk of getting rid of American citizens.

While it is possible that Islam is the one religion that cannot become part of American society, history shows that claims that seem to be bigotry generally turn out to be just that. As such, it seems rather reasonable to regard the accusations against American Muslims as bigotry. This is not to make the absurd claim that every single American Muslim is an ideal, law abiding citizen—just a refutation of unthinking bigotry.



There is a popular belief that Mohawks have no fear of heights. Though part Mohawk, I apparently did not get the part that is fearless about heights: I am terrified of heights. But, I believe a person should not be ruled by fear and so I never let that fear control me. This explains how I ended up falling off the roof and tearing my quadriceps tendon, thus showing that too much philosophy can bust you up. For those not familiar with this important body part, it is the tendon that allows one to do such things as stand and walk. But, I digress—time to leave the subject of falling and get on with the topic at hand.

This fear of heights applies to flying—as soon as I buy my tickets, I start experiencing a sense of dread. In the past, my rather masochistic coping method was to get a window seat and force myself to stare downwards at the ever more distant earth. I got this approach from Aristotle, the stoics and running: one becomes what one does, attitude matters a great deal, and the way to learn to endure pain is to face that pain. While I still dislike heights, the fear is now “at distance”—it is, to use a metaphor, as if I am looking at it from a great height. So, while too much philosophy can bust one up, it can also provide a useful theoretical foundation for weird coping mechanisms. And some say that philosophy is useless.

These days my main dislike of flying is that the process, at least for most of us, is unpleasant. In the United States, we are forced into bit parts in the security theater. Shoes must be removed, forcing us to shuffle along in socks (or barefoot), which feels just a bit humiliating. It is as if we are bad children who might track dirt into the pristine airport. Next is the body scan—which is apparently useless because I am always patted down anyway after the scan. But, perhaps people really cannot resist running their hands over my awesome bod. Or I look like a criminal. With an awesome bod.

Then there is the ritual of getting dressed again—shoes on, belt on, watch back on, wallet back in the pocket and so on. Sort of a wham, bam, thank you Sam sort of situation. Some folks do get to bypass some of the process—those willing to shell out some extra cash and time getting checked by the state. I call this process theater for the obvious reason that it is theater—the security can be easily bypassed and seems based on the principle that discomforting and humiliating people will make them feel safer. That said, I have friends and relatives in the TSA and think well of them—they are good people. The system, which they do not control, is another matter.

While I usually fly Delta, I suspect most airlines have a similar boarding process. Like an oppressive state, Delta has a very rigid class system that governs one’s privileges and one’s abuse. While folks with special needs get to go first, after that there are various distinct groups—these seem to be named on the basis of precious substances like diamonds, gold and quatloos. I assume this is because to get in those groups one must have an adequate supply of diamonds or gold.

Back in the day, boarding early was not much of a privilege: one just got to sit in the plane longer. However, when airlines started charging people for luggage, getting on early became rather important. When everyone is trying to bring on as much as possible as carryon luggage, getting on the plane early can make the difference between jamming that giant rolling “carry on” into the overhead or having it subject to the tender mercies of baggage handling. Interestingly, airlines have started offering to check large carryon luggage for free when flights are crowded—their solution to the problem created by charging for checked luggage is to offer free checked luggage. I suspect that this creates some sort of paradox and that Christopher Nolan will include it in his next movie. There also seems to be a prestige associated with boarding early—folks who can afford the Royal Secret Diamond Elite Magic Flyer level can presumably afford to pay for checked luggage (though they often seem well-laden with carryon luggage as well).

First class, as the name implies, also enjoys better treatment: they have larger seats, get to board early, and generally have better snacks and drinks. They also seem to get special treatment: while the boarding of my last flight was underway, the stewardess had to delay the progress of the little people (coach class) to bring beverages to two folks in first class. We waited there, holding our carryon luggage, until she brought them their drinks and returned. I waited for her—she was just doing her job. I was not very happy with the first class folks—it is a bit classless to hold up boarding because one cannot wait a few minutes for a drink.

On the plus side, the time spent waiting for the better folks to receive their drinks gave me time to apply some pseudo-Marxism to the oppressive class system of the airlines. Since I lack Marx’s writing chops, the best slogan I could come up with was “flyers of the world unite! You have nothing to lose but your cramped seats and comically limited overhead space!” I am certainly looking forward to the classless utopia of the future in which each person is seated according to her size and pays in accord with how much crap she brings on the plane. Plus booze for everyone.