Monthly Archives: May 2013

How can you call yourselves philosophers?

There’s an interesting, maybe even startling interview with Nigel Warburton in the current issue of The Philosophers’ Magazine.  You probably know Warburton from his many books and the podcast he does with David Edmonds, called Philosophy Bites.  When I spoke to him, he had only recently resigned from his post at the Open University.  The reasons he gives are striking.  Here’s a bit of it:

Warburton says he’s happy to see philosophy now being taught in schools, and he hopes it will one day encourage reasonable and serious discussion of not just religion but also a wide range of other issues. That’s largely where the value of philosophy lies for him. Not necessarily in getting at truths, but asking good questions, teasing out what we really think or ought to think, about subjects which make a difference to people.

He says that this kind of work just isn’t done in academic philosophy as it’s practiced today. Part of the problem has to do with “the cuckoo in the nest, the burgeoning managerial class” which he sees as getting in the way of philosophical thinking and teaching. Other trouble lies with departmental committees hiring people who are just like them, creating clusters of similar people with similar views. Just as with society at large, he says, diversity helps departments flourish, but often departmental members are all nearly carbon copies of one another. So in their work they end up trying to discriminate themselves from each other with more and more hair-splitting but ultimately uninteresting distinctions. He reserves particular venom for the REF, the Research Excellence Framework, a system of expert review which assesses research undertaken in UK higher education, which is then used to allocate future rounds of funding. A lot of it turns on the importance of research having a social, economic or cultural impact. It’s not exactly the sort of thing that philosophical reflection on, say, the nature of being qua being is likely to have. He leans into my recorder to make sure I get every word:

“One of the most disturbing things about academic philosophy today is the way that so many supposed gadflies and rebels in philosophy have just rolled over in the face of the REF – particularly by going along with the idea of measuring and quantifying impact,” he says, making inverted commas with his fingers, “a technical notion which was constructed for completely different disciplines. I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that? To think that funding in higher education in philosophy is going to be determined partly by people’s creative writing about how they have impact with their work. Just by entering into this you’ve compromised yourself as a philosopher. It’s not the kind of thing that Socrates did or that Hume did or that John Locke did. Locke may have had patrons, but he seemed to write what he thought rather than kowtowing to forces which are pushing on to us a certain vision, a certain view of what philosophical activities should be. Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”

Strong stuff.  You can read the whole article here:  ‘Nigel Warburton, Virtual Philosopher’.  Please do let me know what you think.

 

Working to the Rule in the Academy

State Seal of Maine. (Photo credit: Wikipedia)

The May 2013 issue of the NEA Higher Education Advocate featured “Working to the Rule in Maine” by Ronald J. Mosley, Jr. In his article, Mosley notes that the last time the faculty in Maine’s public higher education system received a raise was four years ago. He adds that the faculty have been working two years without a contract and that mediation and negotiation with the administration have failed. Oddly enough, this situation is not the result of a financial crisis: there have been three consecutive years of the highest annual surpluses in Maine history. Mosley’s claims match those of my father, who just retired from the system this month.

As Mosley points out in his article, faculty have traditionally volunteered for service activities and often do so beyond what is actually expected (or contracted). While it might be tempting for some to dismiss this service as valueless, faculty often contribute a great deal to their school and the community. In terms of schools, faculty often engage in service that is essential to the operation of the university and to the students, yet is not compensated. To use a concrete example, I am currently contracted at 20% to teach one summer class. That is all my Assignment of Responsibility (AOR) specifies. As such, I have no contractual obligation to do anything beyond that class. However, students still need advising in the summer and I have administrative tasks that need to be completed, such as serving on a critical accreditation committee and handling the various matters required to run the philosophy & religion program. In terms of the community, faculty also provide services to the professional and general community. To use one concrete example, I serve as an (unpaid) referee for professional journals and engage in other (unpaid) community service activities. These extra services, then, are often rather important.

In normal conditions (although what is normal now seems to be rather abnormal) faculty willingly engage in such extra efforts “for the good of the_____” (insert “students”, “school”, “community” and so on as needed). There is also the fact that in better times, faculty are treated with some degree of respect and are reasonably well compensated and thus morale (and generosity) can be good.

However, as Mosley points out, conditions are not normal (or there is a “new normal”): while the cost of living increases yearly, there are few (or no) raises to even match this increase; thus, faculty are effectively paid less each year. There is also the fact that faculty are expected to do more each year. While Mosley does not mention this, the new obsession with assessment has added to faculty workloads and there seems to be a general trend of shifting more administrative burdens to faculty (while, bizarrely, the number of administrators and their salaries increase). For example, I served on nine committees and ran a program review last year, all on top of my usual duties. As a final point, there is also the feeling that faculty are less (or not) respected. Such things rob faculty of motivation and income, and this is certainly not good for the faculty, the schools or the students.

In response to the situation in Maine, Mosley has proposed that the faculty work to the rule. By this he means that the faculty should conscientiously complete the requirements of their contracts, but not go beyond them. He does note that each faculty member should decide whether s/he will go beyond what is contracted. After all, students depend directly and indirectly on much of the “free” work done by faculty and many faculty members will want to work beyond the rule to avoid hurting the students. I suspect that some administrators count on this and perhaps cynically hold students “hostage” to get faculty to do uncompensated work. After all, most faculty will not refuse to advise students in the summer even if the summer contract includes no assigned advising duties.

Not surprisingly, I agree with Mosley (and not just because we are both Mainers). As a general rule, a contract is a contract and thus defines the obligations of the parties involved. While it might be nice of one or both parties to go beyond the contract, this is obviously not obligatory for either party. That is, just as the university is not obligated to drop some extra cash in my account just because it would be “for the good of Mike”, I am not obligated to put in extra hours “for the good of ___” when it is not in my AOR.

The usual counter to this view is that there are things that need to be done (such as advising or meeting the requirements for accreditation) that go beyond what is spelled out in the AOR. The rational counter to this is that things that need to be done should be added to the AOR and properly compensated. After all, if something is important enough to be done, it would seem to be important enough to actually pay someone to do. If it is not important enough to pay someone to do it, then it would seem to be not worth doing.

As another point, it is worth reversing the situation. If a faculty member shirked his duties, it would be fair and just of the school to reduce his compensation, fire him or otherwise respond to such a failure to act in accord with the contract. But this would entail that the reverse is true: if the school decides to push the faculty member beyond the contract, then due compensation should be expected.

One way universities “get around” this is by having vaguely defined obligations for salaried faculty. Various other techniques are also used. For example, at my university the typical faculty member teaches four classes each semester. Each class counts as 20% of her work, thus leaving 20% for other duties. This 20% seems to be infinitely divisible; that is, no matter how many things are added to the 20%, it is always 20%. In the 2012-2013 academic year, my 20% included being the facilitator for the philosophy & religion unit, running the department web pages, advising, publishing, professional service, running the seven-year program review and being a member of nine committees. Another faculty member might have her 20% consist of much less than my 20%, while another faculty member might (God forbid) have even more jammed in there.

Until recently, schools have been able to rely on the willingness of faculty to engage in extra work. While some of this was done in order to earn tenure, much of it was done out of a sense of obligation to the students and school and, perhaps, from a sense of being a valued member of a worthy community. However, this willingness seems to be eroding in the face of administrative decisions and attitudes. Interestingly, we might see the academy become rather like a business with each party sticking tenaciously to its contracted obligations and refusing to do more for the general good, since this notion has no place in the business model being exalted these days. Well, except as a tool used to milk free work from unwary faculty and staff.


Supererogation, Repetition and the Experience of Morning Coffee

An act is said to be supererogatory if it is “above and beyond the call of duty”. We might put this in less Kantian terms: a supererogatory act is one which generates moral value but whose execution is praised rather than demanded. A moral agent, in failing to execute a supererogatory act, cannot reasonably be censured on the basis of this omission. Nobody would realistically have blamed Sydney Carton were he to have taken the view that “a far, far better rest” could be deferred just a little longer.

We might wonder whether such an act is even possible. If an act generates moral value, then we might wish to argue that ipso facto we are obliged to do it. The alternative might seem to acknowledge at least a dualistic account of moral value: that certain values demand their instantiation whereas certain others do not. On what basis could such a discrimination be made? A many-valued theory of value might seem to beckon, especially if, as seems reasonable, we wish to agree that a flourishing moral conversation requires that we are able to describe certain acts and agents as deserving of praise.

The situation seems to be further complicated by the issue of repetition. Supererogatory acts, when repeated, seem to lose lustre. What is exceptional, and therefore praiseworthy, can become quotidian, and thereby expected. I found this recently when, having moved home, I established a morning coffee routine in the local cafe. The first few times I was in there I decided to save the waitress some time and returned my used coffee cups etc to the counter before leaving. After about two weeks I noticed that the waitress (and it is the same waitress) was lingering over her morning newspaper, seemingly unperturbed by the detritus littering my table (by this time I had taken to ordering several coffees, partly to test her out). By week four she was clearing the tables around me, but leaving mine untouched.

Now I accept that, as supererogatory acts go, returning the coffee cups did not involve an excess of spiritual expenditure. It was not Sydney Carton-esque, in that sense. But small, local kindnesses are important too, either as rehearsals for the big league or because for many of us they are about as much as we can offer. But what had happened here was that my act, through repetition, had been denuded of its supererogatory character. What had begun as an act of generosity had been transformed into an obligation. And this transformation was not simply a matter of the waitress perceiving the situation in the wrong way, since in that case I would not also feel the obligation, yet I did. When I described the experience to my brother, who has a gift for taxonomy, he categorised it as a “Larry David” situation. I like to think deeper issues are in play.

But what deeper issues? Discussions of supererogation tend to loudly affirm the separation of act from agent but in examples such as the above the character of the act seems clearly to supervene in some way on the mental states of agents and observers alike. The lesson seems to be this: you repeat a supererogatory act at your peril since pretty soon you will notice that the act flips from “good to do but not bad not to do” to “bad not to do but not good to do” without even a passing acquaintance with “neither good to do nor bad to do”. Or to put it another way: if you’re going to throw yourself on the unexploded hand grenade, best not do it more than once.

Critical Thinking & College

Inquiry: Critical Thinking Across the Disciplines (Photo credit: Wikipedia)

With the ever-increasing cost of a college education, there is ever more reason to consider whether or not college is worth it. While much of this assessment can be in terms of income, there is also the academic question of whether or not students actually benefit intellectually from college.

The 2011 study Academically Adrift showed that a significant percentage of students received little or no benefit from college, which is obviously a matter of considerable concern. Not surprisingly, there have been additional studies aimed at assessing this matter. Of special concern to me is the claim that a new study shows that students do improve in critical thinking skills. While this study can be questioned, I will attest to the fact that the weight of evidence shows that American college students are generally weak at critical thinking. This is hardly shocking given that most people are weak at critical thinking.

My university, like so many others, has engaged in a concerted effort to enhance the critical thinking skills of students. However, there are reasonable concerns regarding the methodology used in such attempts. There is also the concern as to whether or not it is even possible, in practical terms, to significantly enhance the critical thinking skills of college students over the span of a two- or four-year (or more) degree. While I am something of an expert at critical thinking (I mean actual critical thinking, not the stuff that sprang up so people could profit from being “critical thinking” experts), my optimism in this matter is somewhat weak. This is because I have given due consideration to the practical problems of this matter and have been teaching this subject for over two decades.

As with any form of education, it is wise to begin by considering the general qualities of human beings. For example, if humans are naturally good, then teaching virtue would be easier. In the case at hand, the question would be whether or not humans (in general) are naturally good at critical thinking.

While Aristotle famously regarded humans as rational animals, he also noted that most people are not swayed by arguments or fine ideals. Rather, they are dominated by their emotions and must be ruled by pain. While I will not comment on ruling with pain, I will note that Aristotle’s view about human rationality has been borne out by experience. To fast forward to now, experts speak of the various cognitive biases and emotional factors that impede human rationality. This matches my own experience and I am confident that it matches that of others. To misquote Lincoln, some people are irrational all the time and all the people are irrational some of the time. As such, trying to transform people into competent  critical thinkers will generally be very difficult, perhaps as hard as making people virtuous.

In addition to the biological foundation, there is also the matter of preparation. For most students, their first exposure to a substantial course in critical thinking, or even to any real coverage of it, occurs in college. It seems unlikely that students who have gone almost two decades without proper training in critical thinking will be significantly altered by college. One obvious solution, taken from Aristotle, is to begin proper training in critical thinking at an early age.

Another matter of serious concern is the fact that students are exposed to influences that discourage critical thinking and actively encourage irrational thinking. One example of this is the domain of politics. Political discourse tends to be, at best, rhetoric and typically involves the use of a wide range of fallacies such as the straw man, scare tactics and ad hominems of all varieties. For those who are ill-prepared in critical thinking, exposure to these influences can have a very detrimental effect and they can be led far away from reason. I would call for politicians to cease this behavior, but they seem devoted to the tools of irrationality. There is a certain irony in politicians who exploit and encourage poor reasoning being among those lamenting the weak critical thinking skills of students and endeavoring to blame colleges for the problems they themselves have helped create.

Another example of this is the domain of entertainment. As Plato argued in the Republic,  exposure to corrupting influences can corrupt. While the usual arguments about corruption from entertainment  focus on violence and sexuality, it is also important to consider the impact of certain amusements upon the reasoning skills of students.  Television, which has long been said to “rot the brain”, certainly seems to shovel forth fare that is hardly contributing to good reasoning. While I would not suggest censorship, I would encourage students to discriminate and steer clear of shows that seem likely to have a corrosive impact on reasoning. While it might be an overstatement to claim that entertainment can corrode reason, it does seem sensible to note that much of it contributes nothing positive to a person’s mind.

A third example of this is advertising. As with politics, advertising is the domain of persuasion. While good reasoning can persuade, it is (for most people) the weakest tool of persuasion. As such, advertisers flood us with ads employing what they regard as effective tools of persuasion. These typically involve various rhetorical devices and also the use of fallacies. Sadly, the bad logic of fallacies is generally far more persuasive than good reasoning. Students are generally exposed to significant amounts of advertising (they no doubt spend more time exposed to ads than to critical thinking) and it makes sense that this exposure would impact them in detrimental ways, at least if they are not already equipped to properly assess such ads with critical thinking skills.

A final example is, of course, everyday life. Students will typically be exposed to significant amounts of poor reasoning and this will have a significant influence on them. Students will also learn what the politicians and advertisers know: the tools of irrational persuasion will serve them better in our society than the tools of reason.

Given these anti-critical thinking influences, it is something of a wonder that students develop any critical thinking skills.

My Amazon Author Page


On warranted deference

By their nature, skeptics have a hard time deferring. And they should. One of the classic (currently undervalued) selling points for any course in critical thinking is that it grants people an ability to ratchet down the level of trust that they place in others when it is necessary. However, conservative opinion to the contrary, critical thinkers like trust just fine. We only ask that our trust should be grounded in good reasons in cooperative conversation.

Here are two maxims related to deference that are consistent with critical thinking:

(a) The meanings of words are fixed by authorities who are well informed about a subject. e.g., we defer to the international community of astronomers to tell us what a particular nebula is called, and we defer to them if they should like to redefine their terms of art. On matters of definition, we owe authorities our deference.

(b) An individual’s membership in the group grants them prima facie authority to speak truthfully about the affairs of that group. e.g., if I am speaking to physicists about their experiences as physicists, then all other things being equal I will provisionally assume that they are better placed to know about their subject than I am. The physicist may, for all I know, be a complete buffoon. (S)he is a physicist all the same.

These norms strike me as overwhelmingly reasonable. Both follow directly from the assumption that your interlocutor, whoever they are, deserves to be treated with dignity. People should be respected as much as is possible without doing violence to the facts.

Here is what I take to be a banal conclusion:

(c) Members of group (x) ought to defer to group (y) on matters relating to how group (y) is defined. For example, if a philosopher of science tells the scientist what counts as science, then it is time to stop trusting the philosopher.

It should be clear enough that (c) is a direct consequence of (a) and (b).
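For readers who like to see the step spelled out, here is one way the inference might be regimented. The predicate names and the exact formalization are my own illustrative choices (and they read the prima facie authority in (b) as covering, in particular, authority over how the group is defined), not anything the maxims themselves dictate.

```latex
% A rough regimentation of the step from (a) and (b) to (c).
% Illustrative predicates (my own shorthand):
%   A(x,G): x is an authority on how G is defined
%   M(x,G): x is a member of group G
%   D(y,x,G): y ought to defer to x on what counts as G
\begin{align*}
\text{(a)}\quad & \forall x\,\forall G\,\big(A(x,G) \rightarrow \forall y\, D(y,x,G)\big)\\
\text{(b)}\quad & \forall x\,\forall G\,\big(M(x,G) \rightarrow A(x,G)\big)\\
\text{(c)}\quad & \forall x\,\forall G\,\big(M(x,G) \rightarrow \forall y\, D(y,x,G)\big)
\end{align*}
% (c) follows by chaining the two conditionals: membership confers
% (prima facie) definitional authority, and definitional authority
% commands deference on matters of definition.
```

On this reading, the restriction to definitional matters is doing the real work; it is what keeps (c) from collapsing into the much stronger claim (d) considered below.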

Here is a claim which is a logical instantiation of (c):

(c’) Members of privileged groups ought to defer to marginalized groups on matters relating to how the marginalized group is defined. For example, if a man gives a woman a lecture on what counts as being womanly, then the man is acting in an absurd way, and the conversation ought to end there.

As it turns out, (c’) is either a controversial claim, or is a claim that is so close to being controversial that it will reliably provoke ire from some sorts of people.

But it should not be controversial when it is understood properly. The trouble, I think, is that (c) and (c’) are close to a different kind of claim, which is genuinely specious:

(d) Members of group (x) ought to defer to group (y) on any matters relating to group (y).

Plainly, (d) is a crap standard. I ought to trust a female doctor to tell me more about my health as a man than I trust myself, or my male barber. The difference between (d) and (c) is that (c) is about definitions (‘what counts as so-and-so’), while (d) is about any old claim whatsoever. Dignity has a central place when it comes to a discussion about what counts as what — but in a discussion of bare facts, there is no substitute for knowledge.

**

Hopefully you’ve agreed with me so far. If so, then maybe I can convince you of a few more things. There are ways that people (including skeptics) are liable to screw up the conversation about warranted deference.

First, unless you are in command of a small army, it is pointless to command silence from people who distrust you. e.g., if Bob thinks I am a complete fool, then while I may say that “Bob should shut up and listen”, I should not expect Bob to listen. I might as well give orders to my cat for all the good it will do.

Second, if somebody is not listening to you, that does not necessarily mean you are being silenced. It only means you are not in a position to have a cooperative conversation with them at that time. To be silenced is to be prevented from speaking, or to be prevented from being heard on the basis of perverse non-reasons (e.g., prejudice and stereotyping).

Third, while intentionally shutting your ears to somebody else is not in itself silencing, it is not characteristically rational either. The strongest dogmatists are the quietest ones. So a critical thinker should still listen to their interlocutors whenever practically possible (except, of course, in cases where they face irrational abuse from the speaker).

Fourth, it is a bad move to reject the idea that other people have any claim to authority, when you are only licensed to point out that their authority is narrowly circumscribed. e.g., if Joe has a degree in organic chemistry, and he makes claims about zoology, then it is fine to point out the limits of his credentials, and not fine to say “Joe has no expertise”. And if Petra is a member of a marginalized group, it is no good to say that Petra has no knowledge of what counts as being part of that group. As a critical thinker, it is better to defer.

[Edit: be sure to check the comments thread for great discussion!]

Philosophy Carnival #151

Is now online, over at Camels with Hammers. Check it out here.

Violence & Video Games, Yet Again.

Manhunt (video game) (Photo credit: Wikipedia)

While there is an abundance of violence in the real world, there is also considerable focus on the virtual violence of video games. Interestingly, some people (such as the head of the NRA) blame real violence on the virtual violence of video games. The idea that art can corrupt people is nothing new and dates back at least to Plato’s discussion of the corrupting influence of art. While he was mainly worried about the corrupting influence of tragedy and comedy, he also raised concerns about violence and sex. These days we generally do not worry about the nefarious influence of tragedy and comedy, but there is considerable concern about violence.

While I am a gamer, I do have concerns about the possible influence of video games on actual behavior. For example, one of my published essays is on the distinction between virtual vice and virtual virtue and in this essay I raise concerns about the potential dangers of video games that are focused on vice. While I do have concerns about the impact of video games, there has been little in the way of significant evidence supporting the claim that video games have a meaningful role in causing real-world violence. However, such studies are fairly popular and generally get attention from the media.

The most recent study purports to show that teenage boys might become desensitized to violence because of extensive playing of video games. While some folks will take this study as showing a connection between video games and violence, it is well worth considering the details of the study in the context of causal reasoning involving populations.

When conducting a cause-to-effect experiment, one rather important factor is the size of the experimental group (those exposed to the cause) and the control group (those not exposed to the cause). The smaller the number of subjects, the more likely it is that the difference between the groups is due to factors other than the (alleged) causal factor. There is also the concern about generalizing the results from the experiment to the whole population.

The experiment in question consisted of 30 boys (ages 13-15) in total. As a sample for determining a causal connection, this is too small for real confidence to be placed in the results. There is also the fact that the sample is far too small to support a generalization from the 30 boys to the general population of teenage boys. In fact, the experiment hardly seems worth conducting with such a small sample and is certainly not worth reporting on, except as an illustration of how research should not be conducted.
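To make the sample-size worry concrete, here is a minimal simulation sketch; it is my own illustration rather than anything from the study, and the group size of 15, the 70 bpm baseline and the 3 bpm threshold are all assumptions chosen just for the example. It repeatedly draws two groups of 15 from the very same population and checks how often chance alone produces what looks like a meaningful difference in average heart rate.

```python
import random
import statistics

# Minimal sketch: both groups are drawn from the SAME population,
# so any difference between their means is due to chance alone.
# n=15 per group mirrors the 30-boy study; the 3 bpm threshold is
# an arbitrary stand-in for a "meaningful looking" difference.

def chance_difference(n=15, mean_bpm=70, sd_bpm=8, threshold=3):
    group_a = [random.gauss(mean_bpm, sd_bpm) for _ in range(n)]
    group_b = [random.gauss(mean_bpm, sd_bpm) for _ in range(n)]
    diff = abs(statistics.mean(group_a) - statistics.mean(group_b))
    return diff >= threshold  # an "effect" produced by noise alone

trials = 10_000
false_alarms = sum(chance_difference() for _ in range(trials))
print(f"Apparent 'effects' from chance alone: {false_alarms / trials:.1%} of trials")
```

With groups this small, an apparent difference of a few beats per minute turns up in a sizable fraction of runs even though both groups were sampled from exactly the same distribution, which is the sense in which 30 subjects in total is far too thin a basis for causal claims about teenage boys in general.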

The researchers had the boys play a violent video game and a non-violent video game in the evening and compared the results. According to the researchers, those who played the violent video game had faster heart rates and lower sleep quality. They also reported “increased feelings of sadness.”  After playing the violent game, the boys  had greater stress and anxiety.

According to one researcher, “The violent game seems to have elicited more stress at bedtime in both groups, and it also seems as if the violent game in general caused some kind of exhaustion. However, the exhaustion didn’t seem to be of the kind that normally promotes good sleep, but rather as a stressful factor that can impair sleep quality.”

As a veteran of violent video games, I find these results consistent with my own experiences. I have found that if I play a combat game, be it a first-person shooter, an MMO or a real-time strategy game, too close to bedtime, I have trouble sleeping. Crudely put, I find that I am “keyed up” and if I am unable to “calm down” before trying to sleep, my sleep is generally not very restful. I really noticed this when I was raiding in WOW. A raid is a high-stress situation (game stress, anyway) that requires hyper-vigilance and it takes time to “come down” from that. I have experienced the same thing with actual fighting (martial arts training, not random violence). I’ve even experienced something comparable when I’ve been awoken by a big spider crawling on my face; I did not sleep quite so well after that. Graduate school, as might be imagined, put me into this state of poor sleep for about five years.

In general, then, it makes sense that violent video games would have this effect, which is why it is not a good idea to game right up until bedtime if you want to get a good night’s sleep. Of course, it is generally a good idea to relax for about an hour before bedtime: don’t check email, don’t get on Facebook, don’t do work and so on.

While not playing games before bedtime is a good idea, the question remains as to how these findings connect to violence and video games. According to the researchers, the differences between the two groups “suggest that frequent exposure to violent video games may have a desensitizing effect.”

Laying aside the problem that the sample is far too small to provide significant results that can be reliably extended to the general population of teenage boys, there is also the problem that there seems to be a rather large chasm between the observed behavior (anxiety and lower sleep quality) and being desensitized to violence. The researchers do note that the cause and effect relationship was not established and they did consider the possibility of reversed causation (that the video games are not causing these traits, but that boys with those traits are drawn to violent video games).  As such, the main impact of the study seems to be that it got media attention for the researchers. This would suggest another avenue of research: the corrupting influence of media attention on researching video games and violence.

My Amazon Author Page


The science-philosophy connection

In this article published in the Guardian, the theoretical physicist Michael Krämer says all the right things about the connection between science and philosophy. Here’s a brief summary. He points out that, up until the middle of the twentieth century or so, scientists profited from philosophy. He also points out that post-war physicists do not find much to gain from philosophy, presumably referring to philosophy of science and its cognates. (Actually, this point is not exactly right. e.g., It is difficult to imagine Bohm‘s research project unmoored from his holistic ontological convictions. But I digress.)

From this, one might be tempted to heap scorn on philosophy. One might say we ought to just stop doing theoretical philosophy, since what it gets right is not distinctively philosophical, and what is distinctively philosophical is not right.

Refreshingly, Krämer does not travel this route. He acknowledges that philosophy crafts its arguments around certain general kinds of questions, and hence enjoys a degree of disciplinary autonomy — but also that it is ultimately studying the very same universe that the physicists are, and hence that it overlaps significantly with science. Krämer’s conclusion is even-handed. He concludes that the physicists can benefit from listening to the philosophers only so long as the philosophers keep focused on providing a critical understanding of how the actual scientific methods are used. In contrast, if philosophers spend their time making armchair pronouncements about what counts as science, they ought not be listened to.

Like I said, I think Krämer’s got it right, and I think he said it well. And, I might add: my goodness, do philosophers need to hear it. Many of my colleagues and mentors are actively involved both in philosophy and in specialized sciences. They are, to a person, well acquainted with how things go on both sides of the fence, as comfortable in graduate courses in cognitive science as in courses on philosophy of mind, and as comfortable in courses on anti-realism as in courses on theoretical physics. Yet I am heartbroken to hear that their work is often dismissed by reviewers in philosophy journals who have a simplistic normative conception of ‘how science works’. Instead of researching the diversity of methods that scientists actually use, many commentators working in the philosophy of science are interested in policing the boundaries of science through normative fiat. As a result, my colleagues have their papers accepted in top-notch science journals, and turned down by ostensibly top-notch philosophy journals.

One day, the philosophy of science may turn out to be of great importance to science. But that day will not come until philosophers prove themselves willing and able to read the contents of a bibliography.

Though I think Krämer has got most of it right, I do think that he has got one thing wrong. The fact is, the quips offered by theoretical physicists do not, by themselves, tell us anything about the relationship between philosophy and the sciences. It may be agreed that physics is the most developed among the sciences, and it may also be the case that all sciences will need to cash themselves out in physicalist terms. But even having admitted that much, it should also be agreed that physics is not the spokesperson of all the sciences — natural or otherwise. Even if it were true, you cannot conclude from the fact that ‘physicists don’t need theoretical philosophy quite so much anymore’ that ‘science doesn’t need theoretical philosophy quite so much anymore’. Mind you, it may indeed be the case that philosophy has nothing to say to any of the sciences. My point is that this inference needs to be demonstrated, and cannot be inferred from a single exceptional and arbitrarily selected historical period.

Nussbaum on philosophy, art, emotion, etc

It’s Martha Nussbaum’s birthday, or anyway it was yesterday, and to celebrate, here’s part of an interview she did with The Philosophers’ Magazine a year or two ago.  She talks about the role of philosophy, the importance of the liberal arts, poetry, emotion, Mill, and, just a little bit, what it means to live a flourishing life.  Nussbaum has more thoughts per minute than most people.  Interesting stuff.

Nussbaum has something to say about the role of argument and philosophy in liberal arts education – an entire chapter of her book Not for Profit is devoted to it. She discusses the importance of Socratic pedagogy, questions, self-scrutiny, understanding rather than memorisation, critique, and debate. I wonder if this actually devalues philosophy in a backhanded way, reduces it to a mere means to good citizenship?

“Philosophy is constitutive of good citizenship. It’s not just a means to it. It becomes part of what you are when you are a good citizen – a thoughtful person. Philosophy has many roles. It can be just fun, a game that you play. It can be a way you try to approach your own death or illness or that of a family member. It has a wide range of functions in human life. Some of them are connected to ethics, and some of them are not. Logic itself is beautiful. I’m just focusing on the place where I think I can win over people, and say ‘Look here, you do care about democracy don’t you? Then you’d better see that philosophy has a place.’”

Philosophy has a place not just in keeping democracy alive. Nussbaum argues that a liberal arts education – and philosophy in particular – is important for a meaningful life. We need philosophy, she says, to criticise and analyse, but also to help us make sense of our inner lives – our feelings and attitudes towards one another. That’s part of what it is to live a flourishing life.

“It’s a fast-moving world. There are all kinds of reasons not to look within. Peer cultures, teen age cultures particularly, are so competitive that they discourage looking in, thinking, ‘What am I feeling now? What are the names for this cascade of emotions that I’m going through?’ When you go out into life all sorts of disturbing things happen. You love people and that doesn’t always go smoothly. You have children and that brings with it a complicated set of emotions and relationships. You have to confront illness and mortality, both your own and that of people you love. In all those situations you need to be able to look within and understand what you’re feeling.

“Mill understood from his own experience of depression that being able to read poetry – to think about emotions in connection with a work of literature – was a tremendous part of the cultivation of the inner world. It makes you capable of love and happiness and stability.

“Philosophy tells you that you had better look within. Philosophy the way I do it is closely linked to literature and the imagination. For example, when you’re dealing with philosophical accounts of emotion, how could you think philosophically about them without having powerful examples of what they’re like?

“I’m with Mill in thinking that education with respect to the emotions has to have an aesthetic component. He’s funny about this. He says in England we don’t understand this because we think life is all about making money. There is also the legacy of Puritanism. We think there’s something evil about experiencing emotion in connection with works of art. The result is that we become narrow-minded and ungenerous. We have a strict moral conscience but little sympathy with others.

“That’s right about a lot of people in a lot of places and times. The rigidity of conscience without the capacity for sympathy and love can do great damage when you’re a parent or a friend or a lover. So it’s important for a meaningful life to read about and think about works of art.”

Motives for Terror

MQ-1L Predator UAV armed with AGM-114 Hellfire... (Photo credit: Wikipedia)

After the evil and senseless bombing in Boston, there was considerable speculation about the motives of the bombers. Not surprisingly, some folks blamed their preferred demons: some on the left leaped to conclusions involving right-wingers while those on the right leaped to conclusions involving Islam.  As it turns out, the alleged murderers have a connection to Islam.

While some hold the view that there is a strong causal connection between being a Muslim and being a terrorist, the connection obviously cannot be that strong. After all, the vast majority of Muslims do not engage in terrorism. As such, beginning and ending the discussion of the motive for terror with Islam is not adequate.

When it comes to terrorist attacks against the United States, the stock explanation is that the terrorists are motivated by a hatred of our freedom. A common variation on that is that they hate democracy. Another explanation is that they simply hate the United States and other countries.

The explanation that terrorists are motivated by a hatred of our freedom (or democracy) does two main things. The first is that it casts the terrorists as enemies of freedom and democracy, thus presenting them as having evil motives. The second is that it casts the United States and its allies as being attacked because of their virtues. Crudely put, the bad guys are attacking us because they hate what is good.

The explanation that the terrorists simply hate the United States and its allies also does two main things. The first is that it casts the terrorists as simply being haters without any justification for their hate. The second is that it casts the United States and its allies as innocent targets. Crudely put, the haters are attacking us because they are haters.

In both of these approaches, the United States and its allies are presented as innocent victims who are being attacked for wicked or irrational reasons. What certainly helps support this narrative is that the terrorists engage in acts that are wicked and certainly seem irrational. After all, the people who are killed and injured are usually just random innocents who simply happen to be in the blast area at the time. Because of this, it is correct to condemn such terrorists as morally wicked on the grounds that they engage in indiscriminate violence. However, the fact that the direct victims of the terrorists are generally innocent victims of wicked deeds does not entail that the terrorists are motivated to attack innocent countries because they hate us, our freedom or our democracy.

One significant source of evidence regarding the motivation of terrorists is the statements terrorists make regarding their own reasons. In the case of the alleged Boston bomber, he claims that he was motivated by the United States’ wars in Iraq and Afghanistan.  In the case of other terrorists, they have generally claimed they are motivated by the actions of the United States and its allies.

My point here is not to justify the actions of the terrorists. Rather, the point is that the terrorists do not claim to be motivated by the reasons that have been attributed to them. That is, they do not regard themselves as being driven to attack us because they hate our freedom or democracy. They do often claim to hate us, but for rather specific reasons involving our foreign policy. As such, these stock explanations seem to be in error.

It might be countered that the terrorists are lying about their motivations. That is, that they are really driven by a hatred of our freedom or democracy and are just claiming that they are motivated by our foreign policy and associated actions (like invading countries and assassinating people with drones) for some devious reason.

The obvious reply to this is that if terrorists were motivated by a hatred of freedom or democracy, they would presumably attack countries based on their degree of freedom or democracy. Also, a non-stupid terrorist would take into account the ease of attacking a country and what the country could and would do in response. Hitting the United States to strike against freedom or democracy would thus be a poor choice, given our capabilities and how we respond to such attacks (invasions, drone strikes and so on).  To use an analogy, if someone hated athletes, it would not be very sensible to get into a fist fight with a professional mixed martial artist when one could go beat up a marathon runner (who is not also a martial artist).

It might be countered that the United States is the symbol for freedom and democracy, hence the terrorists want to attack the United States even though they know that this will result in retaliation of the sort that many other democratic states cannot or would not engage in.

While this is not impossible, the more plausible explanation is that the terrorists are motivated by their hatred of our foreign policy. After all, invasions, assassinations and such tend to motivate people to engage in violence far more so than some sort of hatred of freedom or democracy.

It might, of course, be wondered why the motivation of terrorists matters. What matters is not why they try to murder people at a marathon but that they try to do such things.

While what they do obviously matters, why they do it also matters. While I obviously believe that terrorism of the sort that took place in Boston is evil, this does not entail that there are no legitimate grievances against the United States and its allies in regard to our foreign policies. To use an analogy, if Bob blows up Sam’s whole family because Sam killed Bob’s son, then Bob has acted wrongly. But this does not prove that Sam acted rightly in killing Bob’s son. In the case of the United States, the fact that we have been attacked by terrorists does not thus make our invasions or drone assassinations right. Now, it might turn out that our actions are right, but we cannot infer that they are right merely because terrorists do terrible things.

Sorting out what motivates terrorists is also rather useful in trying to prevent terrorism. If we assume they are motivated by their hatred of our freedom or democracy, then we would have to abandon our freedom or democracy to remove their motivation. This is obviously something that should not be done.

However, if some terrorists are motivated by specific aspects of our foreign policy (such as drone strikes that kill civilians), then it seems well worth considering whether we should change these policies. To use an analogy, if someone keeps trying to attack me because I am virtuous, then I obviously should not abandon my virtues just to stop these attacks. But if someone keeps trying to attack me because I keep provoking him, then I should consider whether or not I should be doing those things. It might turn out that I am in the right, but it might turn out that I am in the wrong. If I am in the wrong, then I should change. But if he is in the wrong, then I would be warranted in not changing (but I would need to be honest about why he is attacking me). For example, if he goes after me because I am stealing his newspaper and dumping leaves in his yard, then I should probably stop doing that. As another example, if he is going after me because I run past his house, then he should stop doing that.

The same would seem to apply to terrorists. If we are engaged in unjust actions that provoke people, then we should stop those actions. If, however, we are acting justly and this provokes people, then we should continue to the degree that those actions are warranted and necessary. But we should be honest about why they are attacking us.
