Monthly Archives: October 2007

Reasonable People Will Disagree

It’s a tricky thing bringing a debate to a conclusion without discord. You can “agree to disagree,” but that leaves you and your opponent on separate boats, drifting away from each other. Another parting thought is that “reasonable people will disagree.” You stick to your guns, but offer respect to your opponent–what more could anyone want?

Sadly enough, an article I’ve just finished makes me wonder if there’s anything to this platitude. In his contribution to the essay collection Philosophers without Gods, Richard Feldman says No. If you really disagree, how can you think your opponent is reasonable?

Sure, you could think so in other respects. You could think—this person is reasonable on the whole. Or you could chalk up the disagreement to a difference in the evidence available to him or her. But can you really think someone’s dealing reasonably with exactly the issues and evidence at hand, but reaching a different conclusion?

Feldman points out how odd this would be if you were serving on a jury. You’re trying to decide if the defendant is guilty “beyond a reasonable doubt.” You say Yes, another jury member says No. If you really thought the other person arrived at No reasonably, he argues, you would have to change your verdict, not go on disagreeing.

We tend to say “reasonable people will disagree” at the end of debates that make us uncomfortable, like debates about religion. We want to end on a note of mutual acceptance. But if we’re sincere about the other person’s reasonableness, we really ought to end on a note of uncertainty or even alter our stance. So he says.

I find this all fairly convincing, and think there are many cases where Feldman’s right—if we really find an opponent reasonable, the only thing to do is move closer to his or her position or suspend judgment. But aren’t there some instances of “reasonable disagreement”? If everyone studying (say) ethics were reasonable, would they actually all have the same views or at least become equally non-committal?

I keep searching around for a good counterexample to Feldman’s view, but they all tend to get complicated very fast. The more mundane the example, the easier it is to explore—so how about this?—

I’m debating with my friend S. about who would make the best Democratic candidate, Hillary Clinton or Barack Obama. I have a list of Hillary plusses and S. has a list of Barack plusses. Everything on each list really does strike us both as a plus. But they are different sorts of things, hard to sum. When I do the summing, my answer is Hillary. When S. does the summing, her answer is Barack.

Now, if S. does her summing really strangely, I can see thinking she’s unreasonable. For example, if she gives huge weight to Barack’s good looks—that just makes no sense. But I can imagine finding her reasonable, yet still not agreeing with her conclusion.

If that makes sense, then reasonable people can disagree. I suspect there are actually many different ways that reasonable people might be able to disagree, each raising its own set of complicated issues. But there—one case keeps the platitude in circulation. Maybe it’s something we shouldn’t abandon.

Saviours in suits?

We’re desperately trying to finish putting issue 40 of TPM together, but I thought I’d take five minutes out to share some of an interview with Simon Critchley that had to be cut. These bits are about how philosophy as a profession works. For example, he talks about how “there’s a degree of professionalisation in American academia” which you don’t get in the UK:

Generally in the States, the division between university life and the general culture is much broader than it is in the UK. […] The relationship between [academia] and the general cultural discussion is more or less non-existent. The fact, for example, that NYU might be the top rated philosophy department in the US has no effect in terms of the general culture.

This professionalisation also infects graduate studies:

Graduate school is really a training in how to become an academic in that discipline, and you’re meant to respect those professional protocols and replicate them as a graduate student. And everything depends upon where you went to graduate school, who your teachers were, who your advisors were, and that determines where you’re going to get a job.

Despite this, however, teaching is, apparently, freer than it is in Britain:

The sad fact is that British universities have been taken over by accountants and middle managers and swamped in bureaucracy. To teach a new course at a British university you’ve got to give a year’s warning and provide a week-by-week breakdown of aims and objectives and reading lists for every week. Whereas I can just go in at the New School and say I’d like to teach a seminar on this and provide half a page, and then turn up at the first class with a syllabus, more or less. It’s a little like how I remember teaching when I first started.

What interests me is whether this is really a good thing. Sure, if you’ve got an inspirational teacher, like Critchley no doubt is, this is probably great. But most teachers aren’t inspirational; many are really quite poor. Giving them the freedom to turn up and teach what they like seems like a recipe for disaster.
It seems to me that tight management is something almost all academics resent, but is it really the case that the men and women in suits ruin the educational experience, or can they in fact raise the bar?
(An analogy might be Prince, who freed himself from the record company suits, let his creativity loose and, er, released album after album of mediocre material. When were all his classic albums made? When record company execs said “you can do that, but not that”.)

Edit: I’ve recanted the claim that most teachers are quite poor and changed it to the claim that many are. Got a bit carried away there: thought I was a columnist on a tabloid newspaper for a minute. I do look a bit like Richard Littlejohn, alas.

Truth or Consequences?

A type of dilemma comes up over and over again. On one side there’s the value of pursuing, stating, or implementing “truth.” But on the other side there are the dangers of doing so. Maybe you watched that great game show when you were a kid—Truth or Consequences. That’s the dilemma, in a nutshell.

A particularly painful version is playing out right now in the US Congress. A bill was proposed that would have condemned Turkey for mass atrocities committed against Armenians starting in 1915. The bill’s sponsors backed off when Turkey threatened dire consequences. Even the Anti-Defamation League has come out against the bill, because Israel has no allies in the Middle East but Turkey.

Elie Wiesel makes it all seem so simple in the preface to Not on Our Watch, a book about today’s genocide in Darfur. He says “Remember: Silence always helps the killer, never his victims.” Sad to say, it’s just not true. As much as you want to see the Armenian genocide loudly condemned, speaking out about it could cost people their lives.

Should truth be pursued at all costs, or does it depend on the consequences? The same issue comes up in the wake of James Watson’s careless claims about race. Should we pursue the truth about whether there are differences of aptitude between the races? But what if the discovery of differences would add to the problems of the “inferior” race?

A third example—I should think the truth is that people with disabilities should not face discrimination in the workplace. That’s the idea behind the Americans with Disabilities Act, which was passed in the early 90s. You can bet at the time dissenters argued that people would hire fewer people with disabilities if they thought they’d be hemmed in by regulations. It turns out the dissenters were right.

Should we close our eyes to consequences and commit ourselves to the truth? It can seem so if you think of consequences as mere costs and benefits attached to doing what’s independently right or wrong. To a “consequentialist” that can’t make the slightest bit of sense. Rightness, for a consequentialist, is precisely a matter of the impact of our actions. And that means impact in the world as it really is, not impact in a world full of perfectly rational and fair people…who own up to past wrongs, reject racism, and accommodate disabilities.

Even if you’re not a true blue consequentialist, surely costs and benefits are part of what matters, morally. So I can’t see taking too “pure” an approach to any of these questions. They’re all extremely painful. It’s always sad, even tragic, to give less than full force to the truth, but it’s got to be wrong to entirely ignore consequences.

The Small Virtues

In the comments to Just Choose a few days ago there was some talk of the “small virtues.”  It’s tricky coming up with examples.  Punctuality seems like a small virtue, but then maybe it’s actually one of the less important expressions of a great virtue—respect.  I made it to a meeting on time recently and while we waited for the others, everyone agreed that’s what punctuality is all about.

What punctuality amounts to varies a lot depending on the context and culture.  According to an article in The Economist (so don’t blame me for the stereotype), “Punctuality is not a Latin American comparative advantage.”  But I take it everywhere there is some limit on how late you can be. (Right?)

Marital fidelity was mentioned in that earlier discussion as a small virtue.  Bill Clinton’s unfaithfulness was not a small matter to Hillary, I’m sure. “Small virtue…nonsense!” you can just hear her say.  But better that in a president than other vices that play out in a big way on the world stage.  Maybe it was cowardice that made Clinton stand by and do nothing during the slaughter in Rwanda.  The opposite—being rash—seems to be part of what got us into the mess we’re in in Iraq.

Sense of humor seems like a small virtue, if it’s a virtue at all.  Aristotle actually does list wit alongside “serious” virtues like courage and justice (which I’ve always found intriguing).  How about neatness as a small virtue?  And being a slob as a small vice?

Respect is a virtue that interests me a lot–especially the question of what it means to have it when you disagree strongly with another person.  But I’d say it’s a big virtue, so will save the topic for another day!

Watson, racism and expertise

I’m sure most of you have heard about the controversy surrounding James Watson. He made some comments in an interview about the alleged intellectual inferiority of black people and as a result, several talks he was supposed to give were cancelled, including one for the Bristol Festival of Ideas, which I am involved in.
I was a little annoyed that there was not enough debate about which parts of his comments were unacceptable and which were simply controversial. I wrote a post at Comment is Free about it, which you can read here. In summary, I pulled out five different claims and invited readers to decide which were racist and which were worthy of open debate. These are:

1) Average genetic differences between human populations result in different distributions of observable characteristics.
2) Genetic differences may extend to cognitive as well as purely physiological characteristics.
3) The scientific investigation as to whether such cognitive differences exist has found evidence that average IQ is not constant across the world.
4) Some ethnic groups are superior to others.
5) People who have to deal with black employees find they are not equal to whites.

One problem with writing this is that I’m not a scientist, and my grasp of the technicalities of genetics is limited: I can never quite remember exactly what an allele is and I don’t know the right way to talk about “populations”, for example. A few people took me to task for this.
But this just raised the whole question of who can contribute to a debate. It seems to me that in the general public arena, we need people offering perspectives who are not narrow experts, and so we should not pounce on them too viciously if they’re not exactly right in their science, unless they are misleadingly wrong. I don’t think I was – correct me if I’m wrong.
However, the problem of expertise quickly becomes recursive. (Is that the right word?) For how do I know whether or not the things I don’t know precisely are important or not? It could easily seem that only an expert can tell whether my lack of expertise is critical or not in the given context. So as a non-expert, I go beyond my expertise whenever I say something and I don’t have the expert knowledge to know whether my level of simplicity is too simplistic. But then it seems debate has to be left to the experts after all.


There are lots of visually ambiguous drawings, and maybe the most famous one is the duck-rabbit.  You can see it as a duck or as a rabbit.  What’s annoying me is the ‘or’ in that sentence.  Why do I have to see it as a duck or as a rabbit?  Why not both at once?  One answer is that our concepts get to work well upstream in our visual experience.  As Kant might put it, our visual experience is largely constituted by our conceptual store.  We don’t just see stuff, we see stuff as stuff.  I can add to my conceptual store, maybe tweak it with a bit of education or experience, but whatever is going on in my head is not entirely up to me.  There’s unconscious processing whirring away in there, and that leaves me with a duck or a rabbit, but not both at once.

I’m also a little exercised at the moment by cross-modal effects, instances in which it makes no sense to separate one sense modality from another.  You can get a feel for it yourself by experiencing The McGurk Effect firsthand right here.  What it seems to show is that speech perception is cross-modal:  sight and sound blend into one experience, leaving you with something more or other than the sum of the parts.  Just like the duck-rabbit, it’s outside of your conscious control.  Knowing it’s an illusion does not put paid to the illusion.

Hume argues that we have no good grounds for belief in external objects, but we have the belief no matter what.  We can’t seem to help it when it comes to inductive inference either.  So too, maybe, with our belief in a persisting and unchanging self.  Hume runs arguments against all these things, but he knows we can’t help it.  Nature, he says, has not left it up to us to believe in this stuff — we just do.  It can be a worrying thought for a philosopher.  What if the unconscious stuff whirring away in there does more than lump me with a duck or a rabbit, does more than blend up my modalities for me?  What if it just settles some philosophical questions for us, no matter where the arguments really lead?

Just Choose

The first work of philosophy I ever read was Jean-Paul Sartre’s article “Existentialism is a Humanism.” That was a long time ago when I was a freshman in college, but I still like the article today. If you look at it carefully, you’ll find much to criticize, but the article is full of good ideas.

The main point of it is that we are responsible for our own choices. You can’t say you had to choose X because of… a book, a moral theory, a religious idea, an adviser, or even your feelings. You chose the book, the theory, the idea, the adviser, and you even chose how to interpret your own feelings.

What happens when you own up to your own responsibility? Sartre says you’ve got to see that you have a huge weight on your shoulders, because you choose for all, not just for yourself. Not literally, but “human nature” is something we’re all continually fashioning. It’s not “up there” in God’s intentions or “in here” in our genes but continually created through our choices. If you lie, cheat, and steal, that’s your contribution to what humankind amounts to. Are you sure you want that to be your contribution?

Once you own up to your responsibility, and admit the weight that’s on your shoulders, then what? Here’s where you might not be entirely satisfied with Sartre. He says you must “just choose.” But it seems as if that’s the point when you should actually think things through carefully, looking at the reasons for doing this or doing that. There might be better reasons for one option and worse reasons for the other.

I might admit my responsibility, and feel the weight on my shoulders, but “just choose” badly. Sartre has a famous example of a young man who’s choosing between staying with his ailing mother and joining the resistance. All he can do, says Sartre, is…choose, in full awareness of his responsibility. But what if he were feeling pulled between staying with his ailing cat and joining the resistance? Or between running off to get rich in America and staying with his ailing mother?

By focusing on a particularly difficult dilemma, Sartre makes it seem as if every choice is basically a toss-up. Not so. Sometimes the best reasons are on the side of one option, not the other. Still, Sartre has a point–it’s you who must sort out the reasons. We surely reason badly when we pretend that reasons fall out of the sky.

Pinker, orgasms and obfuscation

Every now and again someone says something which isn’t new, but it makes things so clear, other things begin to fall into place. I had this experience talking to Steven Pinker after his talk at the Bristol Festival of Ideas autumn season.
Pinker made the simple point that language works best when the experience it is describing is digital rather than analogue. (I may be getting him slightly wrong, but this is the gist.) There’s a good example in his new book, The Stuff of Thought. Although languages differ enormously in the words they use to describe what we would think of as borderline colours (e.g. orangey-red, turquoise) almost all users of all languages will identify the same sample as a paradigmatic example of a primary colour, and will have common words for these colours. The hypothesis is that the human mind universally does discriminate between primary colours clearly (unless you’re colour-blind of course), and languages all reflect this. But some other colour differences are not so clear to us and again, our lack of clear, easily communicable words for these colours reflects that fact. This is what is meant by the digital nature of language: it works best when it carves the world up into things that clearly are or are not x.
In contrast, it is notoriously hard to describe analogue experiences with words: those which may be very vivid, but are not easily contrasted with other experiences. Orgasms are a classic example here. The problem is that there is not anything we can really contrast the experience with, and so it is hard for language to get a grip on it. I remember a funny example of a parent trying to describe an orgasm to a child as something like a cross between a tickle and a sneeze. You can see what is meant, but not only does it fall way short of the mark, try doing better yourself.
This is when I had my epiphany. Does this not provide some kind of crude test for whether imprecision is inescapable or a case of obfuscation? Here’s what I mean. When someone is failing to offer the crystalline clarity sought by science and at least some branches of philosophy, we should not assume they are just talking nonsense. Rather, we should ask: is what they are trying to describe something the nature of which really is digital, or is it analogue? If it is the latter, that explains why they can’t be very clear. If the former, their lack of clarity is just a reflection of sloppiness of thought, or worse.
What might count as such analogue phenomena? The nature of the divine? Aesthetic appreciation? Genuine moral dilemmas of a singular nature?

What I’m Not Reading

I don’t know if any of you have come across Marshal Zeringue’s various book blogs. They’re all rather marvellous in their different, quirky ways. His latest one is called Writers Read, in which he simply asks writers, what are you reading right now? Funnily enough, people are always reading various impressive-sounding tomes. It reminds me of a thing the Guardian did at the Hay festival a few years back when they asked people what they were thinking of “right now”. Amazingly, hardly anyone was admiring the rear of the person in front of them, wondering about what to have for lunch, or internally singing Do You Know The Way To San Jose? Instead, we have answers that start:

“For a long time I’ve been fascinated with neurology and psychoanalysis…” (Siri Hustvedt)
“Right now I’m thinking about the G8 summit…” (Ian Rankin)
“Why bother? Auschwitz. The gulags…” (Philippe Sands)

A few were refreshingly honest.

“I’ve just been given some Polish chocolate and I’m wondering what Polish chocolate is like…” (Kazuo Ishiguro)
“I’ve just seen Bill Deedes whizz by, and I was thinking how nippy he is on his sticks.” (Julian Clary, though he is a comedian)

I read things like that and think I’ll never be able to feign the gravitas that gets you taken seriously as an intellectwal.
Anyway, Marshal asked me what I was reading right now, and this is what I told him. Is my reading sufficiently high-brow, I wonder? (More on Pinker soon.)

The Little Red Hen

I’m not sure it makes sense for kids to get all tied up in knots about the traditional problems of philosophy. I mean, do kids need to worry about whether they have free will? Whether they really know the world is “out there”? Whether morality is “absolute”?

But then, there are a lot of much less hair-raising questions that you can discuss with kids. They sometimes spring forth from children’s fiction. Here goes—some philosophy for kids.

You remember the Little Red Hen. She wanted to make some bread and she had a bunch of slacker friends: a dog, a cat, and a pig.

“Who will help me pick the wheat?” she asked. “Not I,” said the dog. “Not I,” said the cat. “Not I,” said the pig.

“Then I’ll do it,” said the Little Red Hen. And she did.

Then she had to grind the wheat, and make the dough, and put it in the oven. The friends wouldn’t help her with anything.

When the bread was all done, she said “Who will help me eat the bread?” Now her friends started singing a different tune.

“I will,” said the dog. “I will,” said the cat. “I will,” said the pig.

In a shocking turnaround, the Little Red Hen said, “I picked the wheat, I ground the wheat, I made the dough, etc. Now I will eat the bread.” And she did.

Question: Did the Little Red Hen do the right thing? Open for comments from kids and kids-at-heart, three and up.