Category Archives: State of the Profession

The Left’s Defection from Progress

Note: This is a slightly abridged (but otherwise largely warts and all) version of an article that I had published in Quadrant magazine in April 1999. It has not previously been published online (except that I am cross-posting on my own blog, Metamagician and the Hellfire Club). While my views have developed somewhat in the interim, there may be some advantage in republishing it for a new audience, especially at a time when there is much discussion of a “regressive left”.

I.

In a recent mini-review of David Stove’s Anything Goes: Origins of Scientific Irrationalism (originally published in 1982 as Popper and After), Diane Carlyle and Nick Walker make a casual reference to Stove’s “reactionary polemic”. By contrast, they refer to the philosophies of science that Stove attacks as “progressive notions of culture-based scientific knowledge”. To say the least, this appears tendentious.

To be fair, Carlyle and Walker end up saying some favourable things about Stove’s book. What is nonetheless alarming about their review is that it evidences just how easy it has become to write as if scientific realism were inherently “reactionary” and the more or less relativist views of scientific knowledge that predominate among social scientists and humanities scholars were “progressive”.

The words “reactionary” and “progressive” usually attach themselves to political and social movements, some kind of traditionalist or conservative backlash versus an attempt to advance political liberties or social equality. Perhaps Carlyle and Walker had another sense in mind, but the connotations of their words are pretty inescapable. Moreover, they would know as well as I do that there is now a prevalent equation within the social sciences and humanities of relativist conceptions of truth and reality with left-wing social critique, and of scientific realism with the political right. Carlyle and Walker wrote their piece against that background. But where does it leave those of us who retain at least a temperamental attachment to the left, however nebulous that concept is becoming, while remaining committed to scientific realism? To adapt a phrase from Christina Hoff Sommers, we are entitled to ask who has been stealing socially liberal thought in general.

Is the life of reason and liberty (intellectual and otherwise) that some people currently enjoy in some countries no more than an historical anomaly, a short-lived bubble that will soon burst? It may well appear so. Observe the dreadful credulity of the general public in relation to mysticism, magic and pseudoscience, and the same public’s preponderant ignorance of genuine science. Factor in the lowbrow popularity of religious fundamentalism and the anti-scientific rantings of highbrow conservatives such as Bryan Appleyard. Yet the sharpest goad to despair is the appearance that what passes for the intellectual and artistic left has now repudiated the Enlightenment project of conjoined scientific and social progress.

Many theorists in the social sciences and humanities appear obsessed with dismantling the entirety of post-Enlightenment political, philosophical and scientific thought. This is imagined to be a progressive act, desirable to promote the various social, cultural and other causes that have become politically urgent in recent decades, particularly those associated with sex, race, and the aftermath of colonialism. The positions on these latter issues taken by university-based theorists give them a claim to belong to, if not actually constitute, the “academic left”, and I’ll refer to them with this shorthand expression.

There is, however, nothing inherently left-wing about wishing to sweep away our Enlightenment legacy. Nor is a commitment to scientific inquiry and hard philosophical analysis inconsistent with socially liberal views. Abandonment of the project of rational inquiry, with its cross-checking of knowledge in different fields, merely opens the door to the worst kind of politics that the historical left could imagine, for the alternative is that “truth” be determined by whoever, at particular times and places, possesses sufficient political or rhetorical power to decide what beliefs are orthodox. The rationality of our society is at stake, but so is the fate of the left itself, if it is so foolish as to abandon the standards of reason for something more like a brute contest for power.

It is difficult to know where to start in criticising the academic left’s contribution to our society’s anti-rationalist drift. The approaches I am gesturing towards are diverse among themselves, as well as being professed in the universities side by side with more traditional methods of analysing society and culture. There is considerable useful dialogue among all these approaches, and it can be difficult obtaining an accurate idea of specific influences within the general intellectual milieu.

However, amidst all the intellectual currents and cross-currents, it is possible to find something of a common element in the thesis or assumption (sometimes one, sometimes the other) that reality, or our knowledge of it, is “socially constructed”. There are many things this might mean, and I explain below why I do not quarrel with them all.

In the extreme, however, our conceptions of reality, truth and knowledge are relativised, leading to absurd doctrines, such as the repudiation of deductive logic or the denial of a mind-independent world. Symptomatic of the approach I am condemning is a subordination of the intellectual quest for knowledge and understanding to political and social advocacy. Some writers are prepared to misrepresent mathematical and scientific findings for the purposes of point scoring or intellectual play, or the simple pleasure of ego-strutting. All this is antithetical to Enlightenment values, but so much – it might be said – for the Enlightenment.

II.

The notion that reality is socially constructed would be attractive and defensible if it were restricted to a thesis about the considerable historical contingency of any culture’s social practices and mores, and its systems of belief, understanding and evaluation. These are, indeed, shaped partly by the way they co-evolve and “fit” with each other, and by the culture’s underlying economic and other material circumstances.

The body of beliefs available to anyone will be constrained by the circumstances of her culture, including its attitude to free inquiry, the concepts it has already built up for understanding the world, and its available technologies for the gathering of data. Though Stove is surely correct to emphasise that the accumulation of empirical knowledge since the 17th century has been genuine, the directions taken by science have been influenced by pre-existing values and beliefs. Meanwhile, social practices, metaphysical and ethical (rather than empirical) beliefs, and the methods by which society is organised and by which human beings understand their experience are none of them determined in any simple, direct or uniform way by human “nature” or biology, or by transcendental events.

So far, so good – but none of this is to suggest that all of these categories should or can be treated in exactly the same way. Take the domain of metaphysical questions. Philosophers working in metaphysics are concerned to understand such fundamentals as space, time, causation, the kinds of substances that ultimately exist, the nature of consciousness and the self. The answers cannot simply be “read off” our access to empirical data or our most fundamental scientific theories, or some body of transcendental knowledge. Nonetheless, I am content to assume that all these questions, however intractable we find them, have correct answers.

The case of ethical disagreement may be very different, and I discuss it in more detail below. It may be that widespread and deep ethical disagreement actually evidences the correctness of a particular metaphysical (and meta-ethical) theory – that there are no objectively existing properties of moral good and evil. Yet, to the extent that they depend upon empirical beliefs about the consequences of human conduct, practical moral judgements may often be reconcilable. Your attitude to the rights of homosexuals will differ from mine if yours is based on a belief that homosexual acts cause earthquakes.

Again, the social practices of historical societies may turn out to be constrained by our biology in a way that is not true of the ultimate answers to questions of metaphysics. All these are areas where human behaviour and belief may be shaped by material circumstances and the way they fit with each other, and relatively unconstrained by empirical knowledge. But, to repeat, they are not all the same.

Where this appears to lead us is that, for complicated reasons and in awkward ways, there is much about the practices and beliefs of different cultures that is contingent on history. In particular, the way institutions are built up around experience is more or less historically contingent, dependent largely upon economic and environmental circumstances and on earlier or co-evolving layers of political and social structures. Much of our activity as human beings in the realms of understanding, organising, valuing and responding to experience can reasonably be described as “socially constructed”, and it will often make perfectly good sense to refer to social practices, categories, concepts and beliefs as “social constructions”.

Yet this modest insight cries out for clear intellectual distinctions and detailed application to particular situations, with conscientious linkages to empirical data. It cannot provide a short-cut to moral perspicuity or sound policy formulation. Nor is it inconsistent with a belief in the actual existence of law-governed events in the empirical world, which can be the subject of objective scientific theory and accumulating knowledge.

III.

As Antony Flew once expressed it, what is socially constructed is not reality itself but merely “reality”: the beliefs, meanings and values available within a culture.

Thus, none of what I’ve described so far amounts to “social constructionism” in a pure or philosophical sense, since this would require, in effect, that we never have any knowledge. It would require a thesis that all beliefs are so deeply permeated by socially specific ideas that they never transcend their social conditions of production to the extent of being about objective reality. To take this a step further, even the truth about physical nature would be relative to social institutions – relativism applies all the way down.

Two important points need to be made here. First, even without such a strong concept of socially constructed knowledge, social scientists and humanities scholars have considerable room to pursue research programs aimed at exploring the historically contingent nature of social institutions. In the next section, I argue that this applies quintessentially to socially accepted moral beliefs.

Second, however, there is a question as to why anyone would insist upon the thesis that the nature of reality is somehow relative to social beliefs all the way down, that there is no point at which we ever hit a bedrock of truth and falsity about anything. It is notorious that intellectuals who use such language sometimes retreat, when challenged, to a far more modest or equivocal kind of position.

Certainly, there is no need for anyone’s political or social aims to lead them to deny the mind-independent existence of physical nature, or to suggest that the truth about it is, in an ultimate sense, relative to social beliefs or subjective to particular observers. Nonetheless, many left-wing intellectuals freely express a view in which reality, not “reality”, is a mere social construction.

IV.

If social construction theory is to have any significant practical bite, then it has to assert that moral beliefs are part of what is socially constructed. I wish to explore this issue through some more fundamental considerations about ethics.

It is well-documented that there are dramatic contrasts between different societies’ practical beliefs about what is right and wrong, so much so that the philosopher J.L. Mackie said that these “make it difficult to treat those judgements as apprehensions of objective truths.” As Mackie develops the argument, it is not part of some general theory that “the truth is relative”, but involves a careful attempt to show that the diversity of moral beliefs is not analogous to the usual disagreements about the nature of the physical world.

Along with other arguments put by philosophers in Hume’s radical empiricist tradition, Mackie’s appeal to cultural diversity may persuade us that there are no objective moral truths. Indeed, it seems to me that there are only two positions here that are intellectually viable. The first is that Mackie is simply correct. This idea might seem to lead to cultural relativism about morality, but things are not always what they seem.

The second viable position is that there are objective moral truths, but they take the form of principles of an extremely broad nature, broad enough to help shape – rather than being shaped by – a diverse range of social practices in different environmental, economic and other circumstances.

If this is so, particular social practices and practical moral beliefs have some ultimate relationship to fundamental moral principles, but there can be enormous “slippage” between the two, depending on the range of circumstances confronting different human societies. Moreover, during times of rapid change such as industrialised societies have experienced in the last three centuries – and especially the last several decades – social practices and practical moral beliefs might tend to be frozen in place, even though they have become untenable. Conversely, there might be more wisdom, or at least rationality, than is apparent to most Westerners in the practices and moral beliefs of traditional societies. All societies, however, might have practical moral beliefs that are incorrect because of lack of empirical knowledge about the consequences of human conduct.

Taken with my earlier, more general, comments about various aspects of social practices and culturally-accepted “reality”, this approach gives socially liberal thinkers much of what they want. It tends to justify those who would test and criticise the practices and moral beliefs of Western nations while defending the rationality and sophistication of people from colonised cultures.

V.

The academic left’s current hostility to science and the Enlightenment project may have its origins in a general feeling, brought on by the twentieth century’s racial and ideological atrocities, that the Enlightenment has failed. Many intellectuals have come to see science as complicit in terror, oppression and mass killing, rather than as an inspiration for social progress.

The left’s hostility has surely been intensified by a quite specific fear that the reductive study of human biology will cross a bridge from the empirical into the normative realm, where it may start to dictate the political and social agenda in ways that can aptly be described as reactionary. This, at least, is the inference I draw from left-wing intellectuals’ evident detestation of human sociobiology or evolutionary psychology.

The fear may be that dubious research in areas such as evolutionary psychology and/or cognitive neuroscience will be used to rationalise sexist, racist or other illiberal positions. More radically, it may be feared that genuine knowledge of a politically unpalatable or otherwise harmful kind will emerge from these areas. Are such fears justified? To dismiss them lightly would be irresponsible and naive. I can do no more than place them in perspective. The relationship between the social sciences and humanities, on the one hand, and the “hard” end of psychological research, on the other, is one of the most important issues to be tackled by intellectuals in all fields – the physical sciences, social sciences and humanities.

One important biological lesson we have learned is that human beings are not, in any reputable sense, divided into “races”. As an empirical fact of evolutionary history and genetic comparison, we are all so alike that superficial characteristics such as skin or hair colour signify nothing about our moral or intellectual worth, or about the character of our inner experience. Yet, what if it had turned out otherwise? It is understandable if people are frightened by our ability to research such issues. At the same time, the alternative is to suppress rational inquiry in some areas, leaving questions of orthodoxy to whoever wins the naked contest for power. This is neither rational nor safe.

What implications could scientific knowledge about ourselves have for moral conduct or social policy? No number of factual statements about human nature, by themselves, can ever entail statements that amount to moral knowledge, as Hume demonstrated. What is required is an ethical theory, persuasive on other grounds, that already links “is” and “ought”. This might be found, for example, in a definition of moral action in terms of human flourishing, though it is not clear why we should, as individuals, be concerned about something as abstract as that – why not merely the flourishing of ourselves or our particular loved ones?

One comfort is that, even if we had a plausible set of empirical and meta-ethical gadgets to connect what we know of human nature to high-level questions about social policy, we would discover significant slippage between levels. Nature does not contradict itself, and no findings from a field such as evolutionary psychology could be inconsistent with the observed facts of cultural diversity. If reductive explanations of human nature became available in more detail, these must turn out to be compatible with the existence of the vast spectrum of viable cultures that human beings have created so far. And there is no reason to believe that a lesser variety of cultures will be workable in the material circumstances of a high-technology future.

The dark side of evolutionary psychology includes, among other things, some scary-looking claims about the reproductive and sociopolitical behaviour of the respective sexes. True, no one seriously asserts that sexual conduct in human societies and the respective roles of men and women within families and extra-familial hierarchies are specified by our genes in a direct or detailed fashion. What, however, are we to make of the controversial analyses of male and female reproductive “strategies” that have been popularised by several writers in the 1990s? Perhaps the best-known exposition is that of Matt Ridley in The Red Queen: Sex and the Evolution of Human Nature (1993). Such accounts offer evidence and argument that men are genetically hardwired to be highly polygamous or promiscuous, while women are similarly programmed to be imperfectly monogamous, as well as sexually deceitful.

In responding to this, first, I am in favour of scrutinising the evidence for such claims very carefully, since they can so readily be adapted to support worn-out stereotypes about the roles of the sexes. That, however, is a reason to show scientific and philosophical rigour, not to accept strong social constructionism about science. Secondly, even if findings similar to those synthesised by Ridley turned out to be correct, the social consequences are by no means apparent. Mere biological facts cannot tell us in some absolute way what are the correct sexual mores for a human society.

To take this a step further, theories about reproductive strategies suggest that there are in-built conflicts between the interests of men and women, and of higher and lower status men, which will inevitably need to be moderated by social compromise, not necessarily in the same way by different cultures. If all this were accepted for the sake of argument, it might destroy a precious notion about ourselves: that there is a simple way for relations between the sexes to be harmonious. On the other hand, it would seem to support rather than refute what might be considered a “progressive” notion: that no one society, certainly not our own, has the absolutely final answer to questions about sexual morality.

Although evolutionary psychology and cognitive neuroscience are potential minefields, it is irrational to pretend that they are incapable of discovering objective knowledge. Fortunately, such knowledge will surely include insight into the slippage between our genetic similarity and the diversity of forms taken by viable cultures. The commonality of human nature will be at a level that is consistent with the (substantial) historical contingency of social practices and of many areas of understanding and evaluative belief. The effect on social policy is likely to be limited, though we may become more charitable about what moral requirements are reasonable for the kinds of creatures that we are.

I should add that evolutionary psychology and cognitive neuroscience are not about to put the humanities, in particular, out of business. There are good reasons why the natural sciences cannot provide a substitute for humanistic explanation, even if we obtain a far deeper understanding of our own genetic and neurophysiological make-up. This is partly because reductive science is ill-equipped to deal with the particularity of complex events, partly because causal explanation may not be all that we want, anyway, when we try to interpret and clarify human experience.

VI.

Either there are no objective moral truths or they are of an extremely general kind. Should we, therefore, become cultural relativists?

Over a quarter of a century ago, Bernard Williams made the sharp comment that cultural relativism is “possibly the most absurd view to have been advanced even in moral philosophy”. To get this clear, Williams was criticising a cluster of beliefs that has a great attraction for left-wing academics and many others who preach inter-cultural tolerance: first, that what is “right” means what is right for a particular culture; second, that what is right for a particular culture refers to what is functionally valuable for it; and third, that it is “therefore” wrong for one culture to interfere with the organisation or values of another.

As Williams pointed out, these propositions are internally inconsistent. Not only does the third not follow from the others; it cannot be asserted while the other two are maintained. After all, it may be functionally valuable to culture A (and hence “right” within that culture) for it to develop institutions for imposing its will on culture B. These may include armadas and armies, colonising expeditions, institutionalised intolerance, and aggressively proselytising religions. In fact, nothing positive in the way of moral beliefs, political programs or social policy can ever be derived merely from a theory of cultural relativism.

That does not mean that there are no implications at all from the insight that social practices and beliefs are, to a large degree, contingent on history and circumstance. Depending upon how we elaborate this insight, we may have good reason to suspect that another culture’s odd-looking ways of doing things are more justifiable against universal principles of moral value than is readily apparent. In that case, we may also take the view that the details of how our own society, or an element of it, goes about things are open to challenge as to how far they are (or remain?) justifiable against such universal principles.

If, on the other hand, we simply reject the existence of any objective moral truths – which I have stated to be a philosophically viable position – we will have a more difficult time explaining why we are active in pursuing social change. Certainly, we will not be able to appeal to objectively applicable principles to justify our activity. All the same, we may be able to make positive commitments to ideas such as freedom, equality or benevolence that we find less arbitrary and more psychologically satisfying than mere acquiescence in “the way they do things around here”. In no case, however, can we intellectually justify a course of political and social activism without more general principles or commitments to supplement the bare insight that, in various complicated ways, social beliefs and practices are largely contingent.

VII.

An example of an attempt to short-circuit the kind of hard thinking about moral foundations required to deal with contentious issues is Martin F. Katz’s well-known article, “After the Deconstruction: Law in the Age of Post-Structuralism”. Katz is a jurisprudential theorist who is committed to a quite extreme form of relativism about empirical knowledge. In particular, his article explicitly assigns the findings of physical science the same status as the critical interpretations of literary works.

Towards the end of “After the Deconstruction”, Katz uses the abortion debate as an example of how what he calls “deconstructionism” or the “deconstructionist analysis” can clarify and arbitrate social conflict. He begins by stating the debate much as it might be seen by its antagonists:

One side of the debate holds that abortion is wrong because it involves the murder of an unborn baby. The other side of the debate sees abortion as an issue of self-determination; the woman’s right to choose what she does to her body. How do we measure which of these “rights” should take priority?

In order to avoid any sense of evasion, I’ll state clearly that the second of these positions, the “pro-choice” position, is closer to my own. However, either position has more going for it in terms of rationality than what Katz actually advocates.

This, however, is not how Katz proposes to solve the problem of abortion. He begins by stating that “deconstructionism” recommends that we “resist the temptation to weigh the legitimacy of . . . these competing claims.” Instead, we should consider the different “subjugations” supposedly instigated by the pro-life and pro-choice positions. The pro-life position is condemned because it denies women the choice of what role they wish to take in society, while the pro-choice position is apparently praised (though even this is not entirely clear) for shifting the decision about whether and when to have children directly to women.

The trouble with this is that it prematurely forecloses on the metaphysical and ethical positions at stake, leaving everything to be solved in terms of power relations. However, if we believe that a foetus (say at a particular age) is a person in some sense that entails moral regard, or a being that possesses a human soul, then there are moral consequences. Such beliefs, together with some plausible assumptions about our moral principles or commitments, entail that we should accept that aborting the foetus is an immoral act. The fact that banning the abortion may reduce the political power of the woman concerned, or of women generally, over against that of men will seem to have little moral bite, unless we adopt a very deep principle of group political equality. That would require ethical argument of an intensity which Katz never attempts.

If we take it that the foetus is not a person in the relevant sense, we may be far more ready to solve the problem (and to advocate an assignment of “rights”) on the basis of utilitarian, or even libertarian, principles. By contrast, the style of “deconstructionist” thought advocated by Katz threatens to push rational analysis aside altogether, relying on untheorised hunches or feelings about how we wish power to be distributed in our society. This approach can justifiably be condemned as irrational. At the same time, the statements that Katz makes about the political consequences for men or women of banning or legalising abortion are so trite that it is difficult to imagine how anyone not already beguiled by an ideology could think that merely stating them could solve the problem.

VIII.

In the example of Katz’s article, as in the general argument I have put, the insight that much in our own society’s practices and moral beliefs is “socially constructed” can do only a modest amount of intellectual work. We may have good reason to question the way they do things around here, to subject it to deeper analysis. We may also have good reason to believe that the “odd” ways they do things in other cultures make more sense than is immediately apparent to the culture-bound Western mind. All very well. None of this, however, can undermine the results of systematic empirical inquiry. Nor can it save us from the effort of grappling with inescapable metaphysical and ethical questions, just as we had to do before the deconstruction.


Philosophy versus science versus politics

Russell Blackford, University of Newcastle

We might hope that good arguments will eventually drive out bad arguments – in what Timothy Williamson calls “a reverse analogue of Gresham’s Law” – and we might want (almost?) complete freedom for ideas and arguments, rather than suppressing potentially valuable ones.

Unfortunately, it takes honesty and effort before the good arguments can defeat the bad.

Williamson on philosophy and science

In a field such as philosophy, the reverse Gresham’s Law analogue may be too optimistic, as Williamson suggests.

Williamson points out that very often a philosopher profoundly wants one answer rather than another to be the right one. He or she may thus be predisposed to accept certain arguments and to reject others. If the level of obscurity is high in a particular field of discussion (as will almost always be the case with philosophical controversies), “wishful thinking may be more powerful than the ability to distinguish good arguments from bad”. So much so “that convergence in the evaluation of arguments never occurs.”

Williamson has a compelling point. Part of the seemingly intractable dissensus in philosophy comes from motivated reasoning about the issues. There is a potential for intellectual disaster in the combination of: 1) strong preferences for certain conclusions; and 2) very broad latitude for disagreement about the evidence and the arguments.

This helps to explain why many philosophical disagreements appear to be, for practical purposes, intractable. In such cases, rival philosophical theories may become increasingly sophisticated, and yet none can obtain a conclusive victory over its rivals. As a result, philosophical investigation does not converge on robust findings. A sort of progress may result, but not in the same way as in the natural sciences.

By way of comparison, Williamson imagines a difficult scientific dispute. Two rival theories may have committed proponents “who have invested much time, energy, and emotion”, and only high-order experimental skills can decide which theory is correct. If the standards of the relevant scientific community are high enough in terms of conscientiousness and accuracy, the truth will eventually prevail. But if the scientific community is just a bit more tolerant of what Williamson calls “sloppiness and rhetorical obfuscation”, both rival theories may survive indefinitely, with neither ever being decisively refuted.

All that’s required for things to go wrong is a bit less care in protecting samples from impurity, a bit more preparedness to accept ad hoc hypotheses, a bit more swiftness in dismissing opposing arguments as question-begging. “A small difference in how carefully standards are applied can make a large difference between eventual convergence and eventual divergence”, he says.

For Williamson, the moral of the story is that philosophy has more chance of making progress if philosophers are rigorous and more demanding of themselves, and if they are open to being wrong. Much philosophical work, he thinks, is shoddy, vague, impatient and careless in checking details.

It may be protected from refutation by rhetorical techniques such as “pretentiousness, allusiveness, gnomic concision, or winning informality.” Williamson prefers philosophy that is patient, precise, rigorously argued, and carefully explained, even at the risk of seeming boring or pedantic. As he puts it, “Pedantry is a fault on the right side.”

An aspiration for philosophy

I think there’s something in this – an element of truth in Williamson’s analysis. Admittedly, the kind of work that he is advocating may not be easily accessible to the general educated public (although any difficulty of style would stem from the real complexities of the subject matter, rather than from an attempt to impress with a dazzling performance).

It’s also possible that there are other and deeper problems for philosophy that hinder its progress. Nonetheless, the discipline is marked by emotional investments in many proposed conclusions, together with characteristics that make it easy for emotionally motivated reasoners to evade refutation.

If we want to make more obvious progress in philosophy, we had better try to counter these factors. At a minimum that will involve openness to being wrong and to changing our minds. It will mean avoiding bluster, rhetorical zingers, general sloppiness and the protection that comes from making vague or equivocal claims.

This can all be difficult. Even with the best of intentions, we will often fail to meet the highest available standards, but we can at least try to do so. Imperfection is inevitable, but we needn’t indulge our urges to protect emotionally favoured theories. We can aspire to something better.

Politics, intellectual honesty, and discussion in the public square

There is one obvious area of discussion in modern democracies where the intellectual rigour commended by Williamson – which he sees as prevalent in the sciences and as a worthy aspiration for philosophers – is given almost no credence. I’m referring to the claims made by rivals in democratic party politics.

Here, the aim is usually to survive and prevail at all costs. Ideas are protected through sloppiness, rhetoric and even outright distortion of the facts, and opponents are viewed as enemies to be defeated. Purity of adherence to a “party line” is frequently enforced, and internal dissenters are treated as heretics. All too often, they are thought to deserve the most personal, microscopic and embarrassing scrutiny. It may culminate in ostracism, orchestrated smearing and other punishments.

This is clearly not a recipe for finding the truth. Whatever failures of intellectual honesty philosophers may display, they are usually very subtle compared to those exhibited during party political struggles.

I doubt that we can greatly change the nature of party political debate, though we can certainly call for more intellectual honesty and for less of the distortion that comes from political Manichaeism. Even identifying the prevalence of political Manichaeism – and making it more widely known – is a worthwhile start.

Greatly changing the nature of party political debate may be difficult because emotions run high. Losing may be seen as socially catastrophic, and comprehensive worldviews are engaged. By its very nature, this sort of debate is aimed at obtaining power rather than inquiring into the truth. Political rhetoric appeals to the hearts and minds – but especially the hearts – of mass electorates. It has an inevitable tendency in the direction of propaganda.

To some extent, we are forced to accept robust, even brutal, debate over party political issues. When we do so, however, we can at least recognise it as exceptional, rather than as a model for debate in other areas. It should not become the template for more general cultural and moral discussions – or even broadly political discussions – and we are right to protest when we see it becoming so.

It’s an ugly spectacle when party politics proceeds with each side attempting to claim scalps – demonizing opponents, attempting to embarrass them or to present them as somehow disgraced, forcing them, if at all possible, to resign from office – rather than seeking the truth.

It’s an even more worrying spectacle when wider debate in the public square is carried on in much the same way. We should be dissatisfied when journalists, literary and cultural critics, supposedly serious bloggers, and academics – and other contributors to the public culture who are not party politicians – mimic party politicians’ standards.

If anything, our politicians need to be nudged toward better standards. But even if that is unrealistic, we don’t have to adopt them as role models. Instead, we can seek standards of care, patience, rigour and honesty. We can avoid engaging in the daily pile-ons, ostracisms, smear campaigns, and all the other tactics that amount to taking scalps rather than honestly discussing issues and examining arguments. We can, furthermore, look for ways to support individuals who have been isolated and unfairly targeted.

High standards

At election time, we may have to vote for one political party or another, or else not vote (formally) at all. But in the rest of our lives, we can often suspend judgement on genuinely difficult issues. We can take intellectual opponents’ arguments seriously, and we can develop views that don’t align with any of the various off-the-shelf ones currently available.

More plainly, we can think for ourselves on matters of philosophical, moral, cultural and political controversy. Importantly, we can encourage others to do the same, rather than trying to punish them for disagreeing with us.

Party politicians are necessary, or at least they are better than any obvious alternatives (hereditary despots, anyone?). But they should never be regarded as role models for the rest of us.

Timothy Williamson asks for extremely high intellectual standards that may not be fully achievable even within philosophy, let alone in broader public discussion. We can, however, aspire to something like them, rather than indulging in the worst – in tribal and Manichaean – alternatives.

The Conversation

Russell Blackford is Conjoint Lecturer in Philosophy at the University of Newcastle

This article was originally published on The Conversation. Read the original article.

Environmental philosophy conferences: To fly, or not to fly?

Can one justify, as an environmentally-minded philosopher, flying to conferences on environmental philosophy?
First, let me make clear that the issue of whether or not one takes individual actions, such as not flying, to ‘do one’s bit’ to help stop dangerous climate change is of secondary importance. The primary issue is political: collective action is what is really needed if we are to do enough to stop manmade climate change. If I choose not to fly, the actual positive impact on the climate resulting from my decision may be vanishingly small: it may even be zero (if it sends a tiny price signal, by reducing demand for fuel, that leads others to burn more fuel because it is slightly cheaper than it would otherwise have been). By contrast, if I get involved in a successful collective effort to rein in emissions (e.g. a successful international climate treaty), that effort will have a very large impact: a guaranteed impact that cannot be bypassed by others’ short-term self-interested economic behaviour.
The issue of whether or not one takes individual actions, such as not flying, to ‘do one’s bit’ to stop dangerous climate change is then of secondary importance; but secondary importance is still a kind of importance. Furthermore, as an environmentally-minded philosopher, one needs to take a lead. Just as it was nauseating and self-defeating to see the world’s leaders flying into Copenhagen for that big famous failure of a climate conference, so the credibility of environmental philosophers is inevitably somewhat tarnished if they turn up to their conferences by air.
And we need to show that another world is possible: we need to model doing things differently. (E.g. insisting on video-conferencing more, as I increasingly do; and helping to make this work.)
Which brings us back, and now directly, to the question that prompts this article: To fly, or not to fly?
One starting point for me, in relation to this difficult question, is to recall the Latin phrase Primum non nocere, “First, do no harm”, associated with the Hippocratic Oath. This dictum, as well as the moral prescriptions behind it, is taught to many doctors in medical school. The injunction of course does not bar them from (say) doing surgery. It certainly does bar them from doing unnecessary surgery. The thing that environmental philosophers need to ask themselves, if they are serious about fighting the war on dangerous climate change, is this: Is your journey really necessary?
There is a tremendous risk of self-deception here. It is so easy for human beings to think that what they are doing is very important, more so than what others are doing. One needs to ask oneself whether one can really be an environmental leader, and a morally self-respecting person, if one sends enough CO2 into the atmosphere to potentially injure or kill a present or future person. I am thinking here of the ground-breaking study by Craig Simmons et al. laid out in the early chapters of The Zed Book, a study which should be much better known than it is. It indicates that for every person currently living a high-carbon lifestyle, including flights etc., on average about 10 future people will suffer from manmade ‘natural’ disasters.
Environmental philosophy might change the world. The choices we as a civilization make really could depend on what wisdom we manage to achieve about ourselves and our place in the world. Does the end justify the means? Well, it certainly doesn’t if there is virtually no prospect of wisdom being achieved.
So those of us contemplating jetting off to a philosophy conference abroad really do need to ask ourselves how much good we would really be doing by going, and whether we can justify the harm that we are certainly responsible for if we go.
I do not say any of this lightly. I love conferences. I can’t do my job as a philosopher properly without going to some, even occasionally by air, although not as many and not as often as in the past. Conferences on climate and the environment could be of huge importance to our dwindling chances of saving ourselves as a civilisation. What’s needed is wisdom, and if philosophers lack the wisdom to help sustain our civilisation, then who has it?
But it does seem to me an extraordinary sign of the level of denial in relation to the climate crisis that hardly anyone seems to take the question of flying to conferences seriously.
Let me give some examples. A few years ago, I said to the organisers of a conference in Florida on ‘Climate Philosophy’ that I wasn’t willing to fly to it. I hoped that we could organise my ‘giving’ my talk there via video-conference. They couldn’t manage this. To their credit, they did set up an audio-link for me to take questions, after someone else read my paper out.
I have also had a more discouraging experience. The organisers of a Scandinavian environmental philosophy event, ‘Climate Existence’, were not even willing to consider my attending by remote means. It is depressing when the organisers of a conference designed to look explicitly at how to stop ourselves climatically obliterating ourselves are not willing to consider how to minimise the event’s own destructive impacts.
On the plus side, I will soon be ‘attending’ by video-conferencing facilities a conference in Copenhagen (yes, the very same Copenhagen!) where I will be giving a talk on environmental governance, just as two years ago I spoke ‘at’ a conference in Australia on ‘Changing the climate: Utopia, dystopia and catastrophe’ (though on that occasion the Skype connection malfunctioned and we were reduced to an audio-link). And last year, I organised a very successful multiple-person video-link with a conference at UEA, and an equally successful Skype lecture beamed into UEA by Hilary Putnam.
The most surprising experience I had recently was arranging my attendance two years back at an EU event in Brussels on intellectual perspectives on biodiversity. The travel form assumed that I would be coming by plane! Of course, I went to that event by Eurostar. (If one can conveniently go to an environmental philosophy conference by train, then there is no excuse for plane-ing it.) What hope is there, if the organisers of an event on biodiversity – massively threatened by rising, dangerous emissions – do not even consider the possibility that international participants will come by means other than plane?
There is hope. Through technologies such as Skype and Oovoo, more and more people are getting used to video-conferencing as an effective way of interacting. I am hopeful that within a few years conference-organisers will be thinking of this, and it won’t be an awkward bolt from the blue when I say to them that I am keen to be there but preferably in electronic form.
To sum up, then. There are, of course, real losses if one chooses not to attend international conferences. Even if one does attend an event by means of new technology, there is no way of recreating by videoconference the feel, the informality, the networking opportunities that come from people being together in a place. As Jeremy Rifkin argues in his recent book, The Empathic Civilisation, the unprecedented dilemma that we face as a civilisation is how to expand our mutual empathy and concern, while reducing our entropic and environmentally-catastrophic impacts.
But certainly I think at least this: If philosophers do not ask themselves whether they can justify travelling to conferences by air, then who will?
My purpose in writing this piece would be served if each reader were to ask themselves seriously the various questions that I have raised in the course of it. I close by briefly indicating the way that I try to answer them.
Aware of the above-mentioned tendency to self-deception, I endeavour to ask myself whether the benefit (I mean a foreseen benefit in terms of philosophical advancement that may itself help people) to me and others of my attending a given conference by air is worth the possible negative effect on future people of my doing so. I perform, in other words, a crude and rather imprecise utilitarian calculation, using the study by Simmons et al. as an aide-memoire for the reality of the stakes. As noted above, the result is that I have drastically reduced my flying. Rather than being a habit and a norm, it has become a rare exception.

[[This is an updated version of a piece that appeared in THE PHILOSOPHER’S MAGAZINE a couple of years ago.]]

Scientism, Quietism and Continental Philosophy

Peter Unger was recently interviewed about his new book critiquing Analytic Philosophy, and in the interview he says a lot of things that plenty of Continental philosophers would not disagree with. But his response is not to turn to Continental philosophy – not at all. Even Bertrand Russell is, in essence, too “Continental” in tone for Unger. He quotes Russell’s view that the value of philosophy lies not in seeking answers (since the questions of philosophy cannot be determinately answered) but in expanding the intellectual imagination, and then dismisses this as “nonsense.”

Unger’s reasoning seems to be that a test could be done to check how creative or dogmatic a person is, which presumably means that we could check whether studying philosophy does or does not enrich our intellectual imagination. This misses the point on two levels: no such tests are actually done, so his argument is moot to start with; more importantly, those who grasp the value of philosophy will be affected by definition, while those who don’t are misunderstanding its purpose.

We owe the word ‘philosopher’ to Socrates, who distinguished between sophists, those who merely argue for the sake of it, and philosophers, lovers of wisdom. Socrates famously tells the story of his realization that the Oracle at Delphi may not have been wrong in proclaiming him the wisest man in Athens, once wisdom is properly defined: he knew that he knew nothing, while the other men thought they had answers. To believe oneself to have things more figured out than everyone else – as Unger, it’s worth noting, repeatedly does – is a form of egotism disappointing to see in a mind meant to be devoted to the nature of being. One man’s capacities may exceed another’s when we are comparing everyday activities, but when the ability at issue is the comprehension of the infinite, the significance is surely reduced. All our lives are short in comparison to the age of the universe.

Unger does mention the Ancients – he says “He [Kit Fine] has no more idea of what he’s doing than Aristotle did, and in Aristotle’s day there was an excuse: nobody knew anything”. This attitude shows his commitment to the scientistic point of view. He states at the outset of the interview that the goal of philosophy is to “write up deep stories which are true, or pretty nearly true, about how it is with the world. By that I especially mean the world of things that includes themselves, and everything that’s spatio-temporally related to them, or anything that has a causal effect on anything else, and so on.” Of course, a phrase like “and so on” may mislead, but it certainly does not sound as if Unger has any interest in questions of meaning or human experience. His dismissal of Ancient investigations as hopeless is particularly telling, though. What does it mean to claim that they “knew nothing”? In some ways, they were more aware of much that we’ve since forgotten: the rotation of the seasons, the placement of the stars, the behavior of animals, the preparation of foods. Knowledge that was once common is now specialized or, in some cases, simply unavailable (consider, for instance, what light pollution has done to common knowledge of the night sky). Industrialization has increased technology, but technology is not equivalent to knowledge – it’s just one form of knowledge.

Analytic philosophers who discover (after already becoming philosophers) that philosophy is not a form of science often propose that the answer is to give up philosophy altogether – turn out the lights and go home. Making that case in a book of philosophy tends to seem a bit hypocritical; then again, the Analytic thinkers who really do give it up will only have the chance to make the argument at cocktail parties. More worth addressing is the fact that Unger avoids mentioning the Continental approach at all. He suggests that philosophy may be “literature” for some, but what this means is unclear (beyond its implying a general worthlessness). From outside the Analytic tradition, philosophy is not the same as literature, but it’s not the same as science either. It has its own category, as the exploration and contextualization of our place in the world.

As Emerson said, each age must write its own books. The wisdom of the past cannot be genetically infused into the next generation. Information is handed down, but true understanding has to be struggled through again and again, and grasped within each particular culture or time.

One last thought: The writer of the interview might think I’m recommending meditation and enlightenment, per the bookstore mentioned at the end of her piece. While I’m not, I think it’s worth bringing up that there are plenty of books in Western philosophy stores that are just as silly as those self-help texts look (was there one about Plato and a Platypus recently?), and Eastern texts that are worthwhile. Unger treats them all as the same in value (“nothing much”) while different in type (“this” vs “that”), whereas I would say it is the difference in value that is paramount; the types may blend together and overlap, given that the subject is so great.

The Decline of the Humanities


One of the current narratives is that the humanities are in danger at American universities. Some schools are cutting funding for the humanities while others are actually eliminating majors and departments. At my own university, the college of arts and sciences was split apart with the humanities and soft sciences in one new college and the now exalted STEM programs in another. Not surprisingly, I was called upon (at a moment’s notice) to defend the continued existence of the philosophy and religion unit I head up. Fortunately, I could point to the fact that our classes regularly overload with students and the fact that our majors have been very successful.

While this narrative is certainly worrisome to faculty in the humanities, it is actually not a new one. For example, although only about 7% of majors are in the humanities, this has been the case since the 1980s. As another example, humanities programs have been subject to cuts for decades. That said, there is clearly a strong current trend towards supporting STEM and cutting the humanities.

As might be suspected, the push to build up the STEM programs has contributed to the decline of funding for humanities programs. Universities and colleges have to allocate their funds and if more funds are allocated to STEM, this leaves less for other programs. There is also the fact that there is much more outside funding (such as from the federal government) for STEM programs. As such, STEM programs can find themselves getting a “double shot” of increased funding from the university and support from outside while humanities programs face reduced support from within the institutions and little or nothing from outside.

Those who argue for STEM over the humanities would make the case that STEM programs should receive more funding. If more students enroll in STEM than in the humanities, then it would clearly be fair that these programs receive more funding. If humanities programs want more funding, then they would need to take steps to improve their numbers.

There is also the argument based on the claim that funding STEM provides a greater return for the money in terms of job creation, educating job fillers and generating research that can be monetized. That is, STEM provides a bigger financial and practical payoff than the humanities. This would, clearly, serve to justify greater funding for STEM. Assuming, of course, that funding should be determined primarily in terms of financial and practical values defined in this manner. As such, if humanities programs are going to earn increased funding, they would need to show that they can generate value of a sort that would warrant their increased funding. This could be done by showing that the humanities have such practical and financial value or, alternatively, arguing that the humanities generate value of a different sort that is still worthy of funding.

Those in the humanities not only need to convince those who redistribute the money, they also need to convince students that the humanities are valuable. This need not mean convincing students to major in the humanities—it is enough to get students to accept the value of the humanities to the degree that they will willingly enroll in such classes and support the programs that offer them.

It has long been a challenge to get students to accept the value of the humanities. When I was an undergraduate almost three decades ago, most students looked down on the humanities, and this has not changed. Now that I am a professor, honesty compels me to admit that most students sign up for my classes because they have to knock out some sort of requirement. I do manage to win some of these students over by showing them the value of philosophy, but many remain indifferent at best.

While it is a tradition to claim that things are worse now than they were when I was a youngster, in this case it is actually true. Recently, there has been a conceptual shift in regard to education: the majority of students now regard the main function of college as job preparation or vocational training. That is, students predominantly see college as a machine that will make them into job fillers for the job creators.

Because of the nature of our economic system, most students do have to worry about competing in a very difficult job market and surviving in a system that is most unkind. As such, it is not unwise of students to take this very practical approach to education.

While it is something of a stereotype, parents do often worry that their children will major in the humanities, and it is not uncommon for parents to pressure their kids to major in something “useful.” When I was a student, people I knew said just that. Now that I am a professor, my students sometimes tell me that their parents are against them taking philosophy classes. While some parents are worried that their children will be corrupted, the main concerns are the same as those expressed by students: the worry that majoring in the humanities is a dead end and that the humanities requirements are delaying graduation and wasting money.

Those of us in the humanities have two main options here. One is to make the case that the humanities actually do provide the skills needed to make it in the world of the job creators. While some regard philosophy as useless, an excellent case can be made that classes in philosophy can be very helpful in getting ready for employment. To use the most obvious example, philosophy is the best choice for those who are considering a career in law. This approach runs the risk of devaluing the humanities and just making them yet another form of job training.

The second is the usual argument from the humanities, which is based on the idea that there is more to life than being a job filler for the job creators. The usual line of argument is that the humanities teach students to address matters of value, to appreciate the arts, and to both think and question. This, as might be imagined, sounds good in principle but can be a very hard sell.

Unfortunately, humanities faculty often fail to convince students, parents and those who control the money that the humanities are valuable. Sometimes the failure is on the part of the audience, but often it is on the part of the faculty. As such, those of us in the humanities need to up our game or watch the shadow over the humanities grow.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Enhanced by Zemanta

Adjuncts & Walmart Workers


The September 2013 issue of the NEA Higher Education Advocate featured an infographic comparing working at Walmart with working as an adjunct/contingent faculty member. Having worked as an adjunct, I can attest to the accuracy of the claims regarding the adjunct experience.

In the usual order of things, a college degree provides a higher earning potential. This is not, however, true for the typical adjunct. In the United States, a retail cashier makes an average of $9.13 an hour, resulting in a yearly income of $20,410. By way of comparison, Goldman Sachs’ health coverage for a higher-end employee (such as Ted Cruz’s wife) amounts to almost twice that amount. An adjunct who is working 40 hours a week will make on average $16,200 a year (which is $7.78 per hour). Running a cash register sometimes requires a high school degree, but not always. Being an adjunct typically requires having a graduate degree, and many adjuncts have doctorates. I did, and I made $16,000 my first year as an adjunct. That was teaching four classes a semester for two semesters. Adjuncts generally do not get any benefits, although some of them do get insurance coverage—as graduate students. I had health insurance as a graduate student (at a very low rate) but not as an adjunct—fortunately I had no serious injuries and only minor illnesses during my insurance-free time. If I had had my quadriceps tendon tear when I was an adjunct, it would have cost me almost $12,000—leaving me only $4,000 for the year (less after taxes).
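The hourly figures above follow from a standard full-time conversion. As a minimal sketch (assuming, as the quoted adjunct figure does, a 40-hour week across 52 weeks, i.e. 2,080 working hours a year):

```python
# Full-time hours in a year: 40 hours/week x 52 weeks.
HOURS_PER_YEAR = 40 * 52  # 2,080 hours

def yearly_to_hourly(yearly_salary):
    """Convert a full-time yearly salary to an hourly rate."""
    return yearly_salary / HOURS_PER_YEAR

# The adjunct figure quoted above: $16,200/year.
adjunct_hourly = yearly_to_hourly(16_200)
print(f"${adjunct_hourly:.2f} per hour")  # roughly the $7.78/hour quoted above
```

Note that the cashier figures quoted above do not follow from this simple conversion ($9.13 × 2,080 is about $18,990, not $20,410), presumably because they come from separately reported averages rather than a single full-time calculation.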

The typical workers for corporations like Walmart tend to be no better off—they do not get much (or any) benefits and hence often do not have health care coverage. It might be wondered how people survive on such low wages and with no benefits. In some cases, people simply do without. When I was an adjunct, I did not have a car, I bought only what food I could afford, I lived in a one bedroom apartment and did all I could to live frugally. I do admit that I splurged on luxuries like running shoes and race entry fees. Fortunately, I did make some extra money writing—which helped support my gaming hobby.

This approach can work for a person who has no dependents, can get by without a vehicle, and has no health issues. However, those who cannot manage this do the obvious: they turn to the state for aid. In the case of Walmart, the taxpayers provide support to its employees. For example, in the state of Wisconsin Walmart employees cost the taxpayers $9.8 million a year in Medicaid benefits alone. Adjuncts would also often qualify for state support. Out of Yankee pride, I did not avail myself of any such aid—I could survive on what I was making, albeit at a relatively low quality of life in Western terms. However, many people do not have the luxury of pride—they need to care for their families or address health issues.

As might be imagined, these low salaries and lack of benefits are a point of concern. Laying aside concerns about the fairness of wages (which actually should not be laid aside), there is the fact that the low pay of many workers is subsidized by the taxpayers. That is, the taxpayers pick up the difference between what the employers pay and what people need to survive. As I have argued before, this is a form of corporate and university socialism: the state support allows schools and corporations to pay low wages and thus generate greater profits. Or, in the case of non-profit schools, funnel the money elsewhere—most likely to administration and things like bonuses for the university president. For example, the previous president of my university was guaranteed a yearly bonus that was about twice the average yearly adjunct salary.

Obamacare is supposed to, in some degree, shift the burden of health care costs from the taxpayer to the employer. The idea is that larger employers will need to provide health care benefits to full time employees or pay a fine. This, as might be imagined, has caused some people to threaten dire consequences. To be specific, some employers, including universities, have stated that they will reduce employee hours so that they fall just under the line for full time employment. Some have even threatened to fire people on the grounds that they cannot afford to pay.

One stock counter to the idea that employers should provide such benefits is that the state has no right to impose such costs on businesses, especially when doing so will cause businesses to fire people and cut their hours. This does have some appeal. However, there is still the question of who will provide the workers with the resources they need to survive.

One view is that the employers have an obligation to provide a living wage to those who do their job and do it competently. Few would argue that an employer is obligated to just hand people money for not working or doing terrible work—after all, a person who can earn his way should do so. As might be imagined, many employers (including universities) would rather not do this. After all, increasing wages to an actual living wage would cut into profits. In the case of universities, such increases would mean cuts in other areas of the budget (but surely not presidential bonuses).

Another view is that private citizens or organizations of private citizens (such as church groups) have the obligation to provide assistance to others via charity. That is, individuals should voluntarily subsidize the employers by providing the employees with the resources they need to survive, such as food. Of course, if private citizens have this obligation, it would seem that the employers (being citizens as well) would also have this obligation. One clever way around this is to contend that corporations are people, just not the sort of people who have moral obligations. Obviously, people do provide such support—but it would certainly be a challenge for private citizens to adequately support all the working people whose wages are not adequate.

A third view is that the state has the obligation to provide the resources people need to survive. This is, for the most part, the current situation. However, since the state gets most of its income from the citizens, this effectively has private citizens subsidizing the employers, only with the state organizing the charity. Once again, if the state is obligated to do this, the obligation merely comes down to the citizens.

A fourth option is that no one has an obligation to provide people with the resources they need to survive, even when those people are actually working full time and generating enough value to allow their employer to pay them living wages. One might make references to the morality nullifying powers of the free-market: while people might have moral obligations, these do not hold in economic relations. One might also reject the idea that people have any such moral obligations to others at all: people must make it on their own or perish, unless someone freely decides to provide assistance.

Overall, it comes down to the question of what, if anything, people owe to each other. My own view is that the market does not nullify morality and that we do have obligations to each other. These obligations include an obligation not to allow other people to suffer or die simply because others are unwilling to pay them a fair, living wage. To head off the usual attacks, I am not claiming that able and competent people should simply be handed resources earned by the toil of others for doing nothing. Rather, my view is about fair wages and ethical behavior. This is why I am against both handing people resources for doing nothing and allowing people to profit off the labor of others: both are cases of people getting the value of others’ work without earning that value themselves.

 


Philosophically vicious

While I think there is a conceptual difference between doing philosophy and being a proper philosopher, I admit that people act as if they are substantially linked. In particular, when someone wants to accuse their intellectual arch-nemesis of being a non-philosopher, they will marshal a reliable collection of taunts or insults. The drama that ensues is usually tedious and not worth dwelling on, except for the fact that the insults that self-described philosophers level against each other actually tell us something about what they value most about philosophy. (And also, I suppose, because there is a small cottage industry in philosophy that is now dedicated to the conceptual analysis of naughty words. Recall Frankfurt on Bullshit, McGinn on Mindfucking, and Aaron James on Assholes.)

If you want to insult a self-described philosopher, you have to point to their vices. A vice is just a lonely virtue — the thing that makes traits virtuous is that they come in clusters. For example, if you have the gift of insight, but lack any other intellectual virtues, then you are a dogmatist.

As far as I can tell, ‘being philosophical’ involves the manifestation of two kinds of virtues: the right intentions (insightful belief, humble commitments), and the right reflective methods (rationality in thought, cooperation in conversation). One should expect that being philosophical means being able to manifest at least some of the right intentions and at least some of the right methods. The aspiring philosopher must manifest the right intentions, but their work cannot be all about good intentions. By the same token, the aspirant must manifest some facility with the right methods, but the whole of their work cannot be confined to reflective methods. Philosophers actually have to help us do something, understand something.

In theory, some insults are grotesque offenses to the philosophical mind. No aspiring philosopher should want to be found guilty of being a dogmatist, worry-wart, puzzle-solver, or sycophant; if the definition of ‘philosopher’ ever countenances such habits of mind, then I will finally know that I have lost all sense of what the word means. There is a non-trivial possibility that I have never known what philosophy is, but I am comforted by the fact that I appear to be in good company. Recall the Gellner-Ryle spat, where variations on all four accusations show up in print. First, Russell admonishes Ryle for running the risk of turning Mind into “the mutual admiration organ of a coterie” (sycophancy); then GRG Mure of Oxford accuses practitioners of the OLP movement of being “long self-immunized to criticism” (dogmatism); and later Arnold Kaufman (Michigan) suggests that the Oxford group is guilty of “precious cleverness” and “genteel subtlety” (puzzle-solvers) and “ritualistic caution” (worry-warts).


The problem with these sorts of insults is that they are so broad that, when they are used by institutional peers, the words will probably have no force. These insults mark out properties of persons which would be obvious if they were true, and hence would not usually even need to be asserted. Between institutional peers, the barb of an insult is most effective to the extent that it conforms to the facts, and to the extent that the assertion actually reveals something informative about those facts. People fall more in love with the subtler insults, ones that are grounded in the truth in a potentially surprising way. The more intemperate and thoughtless your insults, the less people need to pay attention to you.*

Most readers are aware of the fact that during the 20th century there was a distinction between analytic philosophy and continental metaphysics. This distinction was based on innumerable factors, including substantive disagreements over particular viewpoints, and wide disagreement over who counted as an authority in philosophy. And that’s fine. But whatever the initial causes of the divide, it persisted in part because each side was able to caricature the other side as unphilosophical in one of the above ways. For analytic philosophers, continental metaphysicians were seen as romantic malcontents. (Recall Russell on existentialism: “It is from a mood of feeling oppressed that existentialism stages its rebellion against rationalism… The rationalist sees his freedom in a knowledge of how nature works; the existentialist finds it in an indulgence of his moods.”) Meanwhile, continental philosophers thought of analytic philosophers as methodology-obsessed and science-craven. (My use of the past tense is strategic but fanciful.)

***

Some people (let’s call them romantics) talk about philosophy as if it described the expression of deep and serious thoughts on some profound issue. The romantic approach to philosophy likes to think that the primary point of philosophy is to play with ideas, to enjoy the freedom to think. Arguments are not conceived as tools, but as a canvas, and the fruit of the argument comes from weaving out authentic interconnections. The artisan delights in the avant garde, and enjoys seeing what an experimental attitude towards philosophy might bring about.

But no matter how deep you think your beliefs are, no matter how humble you are in adopting them, and no matter how sincere you are in expressing them, you owe it to your readers to show how you could be wrong. As interesting as your deep thoughts may be, if your philosophy of life can’t be assessed in public, and if you take no part in that ongoing assessment, then it is not a part of your work as a philosopher and you’re not acting like much of a philosopher when you do it. Good intentions and deep insights are not enough to acquit a writer of using obscure jargon and dubious inferences. Anthony Kenny knew and collaborated with Jacques Derrida as a young man, but his final judgment on Derrida’s work is both fair and decisive: Derrida’s M.O. was to “introduce new terms whose effect is to confuse ideas that are perfectly distinct”.

Sometimes, people are unfairly targeted as romantics when in retrospect they ought to have been given a fair shake. Marshall McLuhan is one of the most famous Canadian intellectuals of the 20th century, and his work has undeniable insight and natural modesty. He is owed due credit as a futurist and media theorist, and I am sure philosophers could learn quite a lot from his work. But while I leave it to others to determine whether or not he was a proper philosopher, I expect few would count him as one. Certainly, today’s professional philosophers do not. Max Black (anticipating Harry Frankfurt) referred to McLuhan as one of his generation’s humbuggers. All the same, I cannot help but point out that McLuhan seems to have been philosophizing, at least in the generous historical sense that I am working with. While there is no attempt at rigor, there was usually a reasonable chain of inferences and engagement in a wider Humanities-wide conversation. Of course, his dictum “The medium is the message” was obtuse — but even so, the point he was trying to make was genuinely interesting.

***

What holds for one extreme also holds for the other. If you say that philosophy is all about method — if, in other words, you are a scholastic intellectual technician — then it is hard to see how you could make any but the most perfunctory gestures to truth or understanding. When you ask someone who is obsessed with methodology why they do philosophy, they will explain to you the importance of trading reasons for reasons, and how the rules of the philosophical game work. They will not answer a direct question, like “What consequence does this intellectual puzzle have for our lives?”. Instead, the inquiry will be treated as intrinsically valuable in the worst possible sense of the phrase. The technician is interested in settling the ‘rules of chmess‘ once and for all, and the rest of us are unaffected by the effort.

Don’t be too hard on the technician. In all likelihood, the methods-obsessed soul has been appropriately traumatized by the most odious aspects of the philosophical culture, by pointless dogmatists and contrarians. You can hardly blame them for retreating to the safety and surety of intellectual Sudoku, any more than you can blame hobbits for keeping to the Shire.

The approach from method faces an additional burden, in that it does its part in stamping out philosophy as a distinctive and productive part of the Humanities. So, critics of modern analytic philosophy can ask the philosopher to show that reasoning from the armchair is both intellectually productive and distinctively non-scientific. Of course, it is now well-known that armchair methods are not always as productive as they seem. But it is also not obvious that armchair methods are distinctively philosophical. For, contrary to empiricist prejudices, quite a lot of good science could not be done unless we used some kind of aprioristic methods — be that in the form of mathematics, metaphysics, or modelling. Hence, in order to say something distinctive about philosophy, we have to talk about a productive and interesting part of the philosophical tradition that would be tough to sell as science. At least in the broader historical picture, intentional virtues are part of the philosopher’s real estate.

It is much more difficult to name an example of a technician, in part because they are seldom remembered or celebrated after passing on. People bother to remember McLuhan, even if he was not even wrong, because it turns out that he had something to say and it was important that he said it. In contrast, empty refinements of method, applied to irrelevant and inconsequential subjects, are not even ‘not even wrong’ — they are not even bullshit.

— BLSN

* Notice: this lesson only applies when it comes to exchanges between institutional peers. It is quite a different story if there are differences in power-relations, as John Kerry learned in 2004.

Meditations on contract faculty teaching philosophy

This post was written by Rational Hoplite in a recent thread. I thought it was worth sharing in its own right because it speaks to a major issue in the profession. — BLSN

A few years back I was lecturing (adjunct) at a local state university — a non-elite, non-ranking institution with mercifully generous admissions standards, and (hence) a student body fielded mainly from two smallish contiguous area codes. I myself did a semester there very many years ago before completing my undergraduate studies at an equally non-elite non-ranking university with equally charitable admissions policies, in one of the two aforementioned area codes.

This institution had but one “core requirement” philosophy course — an introduction to logic, which frog-marched the students across the badlands of modus ponens and modus tollens, categorical syllogisms, and logical fallacies. At the beginning of the course students sat an 80-question exam covering these topics, and at the end of the course sat a version of the same exam — similar ratios of question-types, but different phrasing. Performance on the exit-exam (we were told) could not count for less than 80% of the students’ final grade.
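For readers rusty on the drill, the two rules that anchor such a course are just the standard textbook schemata — nothing here is specific to that syllabus:

```latex
% Modus ponens: from a conditional and its antecedent, infer the consequent.
%   If P then Q; P; therefore Q.
% Modus tollens: from a conditional and the negation of its consequent,
% infer the negation of the antecedent.
%   If P then Q; not-Q; therefore not-P.
\[
\frac{P \rightarrow Q \qquad P}{Q}\ \text{(modus ponens)}
\qquad\qquad
\frac{P \rightarrow Q \qquad \neg Q}{\neg P}\ \text{(modus tollens)}
\]
```

The fallacies unit typically drills the two invalid look-alikes alongside these: affirming the consequent (P → Q, Q, therefore P) and denying the antecedent (P → Q, ¬P, therefore ¬Q).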

We were given rather a lot of lee-way as to how we delivered the content; and although there was predictable convergence, no two instructors taught the course the same way.

Once it became clear to me that this was the only philosophy class the undergrads were required to take, I took it upon myself to ensure we covered a few other things — among them, (1) an introduction to the main branches of philosophy, and how epistemology and logic are related; (2) a reading and discussion of The Euthyphro; (3) a discussion of the differences between knowledge, belief, and faith; and (4) a discussion of the difference between ‘training’ and ‘education’.

This last topic mattered to me, because of the nature of the course content, on the one hand, and the departmental parameters for assessment, on the other. I had scope to *train* students as I saw fit, to the end of ensuring they performed well on the exit-exam; but the generous latitude notwithstanding, there was very little space therein to advance one whit the students’ education — in the true sense of the word.

Since I used the first two weeks of the term to introduce students to the mood and method of philosophy – to make real for them, so far as possible, what “being philosophical” (about something) might mean, and how important it is that those we designate as “educated” (rather than “well-trained” or “degree-holding”) have a philosophical attitude – students tended to leave the first fortnight of my lectures with precisely the sort of look we like our students to have at the end of the session. Students often lingered behind to chat, or follow up with questions or comments; and even if only a few disclosed to me their symptoms, many showed signs of having been bitten by the bug. But it was very dispiriting to hear students leave the lectures of my colleagues, who – by staying squarely on-track – began their lectures with “All men are mortal…”, and thereafter faithfully plodded through their chosen textbook.

Not that there was anything at all wrong with that. But our students – many of whom should not have been at university, frankly – were, in their first month of their first semester, still looking for those things that would distinguish college from high school. Yomping around on the terra incognita of “If P, Q” on day-one of their first philosophy class ever wasn’t winning hearts and minds to the cause. (There seemed to be little point in discussing the etymology of ‘philosophy’ – which most of my colleagues seemed to do before “Socrates is a man” – if one was going to ignore the question “How does knowledge differ from wisdom?” and jump straight into validity.)

At the first faculty meeting (in October, five weeks into the term), the HoD asked how the new adjuncts were faring; and I – too prideful and stupid to know either my place or how one should respond to such questions from one’s new boss – dared to offer for discussion whether this “core requirement” was such a good idea, and ask of the assembled troops whether it seemed terrible to anyone else that the *one* chance we are guaranteed to make an early impression upon undergraduates is with BARBARA rather than Socrates.

The HoD and senior faculty were very kind and gracious in their response to my untimely meditations. It is how my queries were tabled, though, that is the point of this story.

I insisted – and quite possibly pounded the conference table – that it was our duty (I pray I did not say “solemn duty”) to have our students leave the classroom a little better than they were before they entered it. A little more curious. A little more skeptical. A little confused, perhaps – confused in that positive, productive sense – but certainly a little better than they were when they slammed down hard on the alarm clock and stumbled out of bed in the morning. All educators (I insisted) have this duty; but of all departments, and among all specialists, we more so than others — for if not the philosophers, then who?

“Well” chuckled the four-year-and-still-returning adjunct next to me, “I think you set your standards a little high”.

“Shall we aim instead to leave them no better off, or worse off?”, I responded.

I remained at the university for five consecutive semesters, and in the narrow space allotted me tried my best to ensure that my students were getting their “If P, Q” (etc.), but were also learning to expect more from themselves, engaging their other subjects with an inquisitive and critical eye, and becoming interested in taking more philosophy courses. My enthusiasm for these simple objectives was manifestly not shared by the tenured faculty, while the adjuncts were concerned that coloring outside of departmental lines might redound negatively upon them and injure their status within the guild.

I will tell you that I am between forty and fifty years old, and in no sense or context am I an old-timer. But when I return to my cache of books from the likes of Hocking, Muirhead, Sidgwick, Santayana, or Royce, or rummage through JSTOR archives or Google Scholar for early papers, I confront every time the feeling that philosophy is no longer what it was, and that something wonderful has been lost.

That sentiment, I know, is absurd. But I know, too, that The Guild is not what it was — or, it seems to be no longer what it seems to have been. The basic questions we ask, and enjoin our charges to ask with us, have not changed — or, have not changed very much. I think we all welcome additional questions, as we do new voices to our shared stoa (painted or unpainted).

But I would not mind a real renaissance of philosophy — not by way of new books or para-genres (A Philosopher’s Guide to Metallica on the shelves of Barns~Ignoble left me shuddering), but by way of a return to confidence that what we do is very important. Not for the Guild, or the Academy; not for “democracy” or “social justice”, or even for Western Civilization, or for any single such thing; but for all the good things that may yet be made possible by the courage of an unassuming undergrad from a non-elite, non-ranking state college, who – having become a little more philosophical than she was the month prior – one day finds herself prepared and confident to say: “Sorry, I don’t think that makes sense — and here’s why”. She will need some logic to identify the problem, and for her “here’s why” to be compelling; but she will need philosophy to know that making sense of nonsense matters.