Monthly Archives: May 2011

Nothing

Image: Martin Heidegger's "Sein und Zeit" (via Wikipedia)

This past Saturday I saw a video of Mark Gungor’s “Laugh Your Way to a Better Marriage.” One thing that stood out was his discussion of the difference between the minds of men and women. According to Gungor, a man’s mind can be understood in terms of boxes: we have a box for each thing and each thing has its own box. In contrast, a woman’s mind is like a ball of wires: everything is interconnected and everything is linked to emotions. The highlight of this discussion was the nothing box. As Gungor sees it, each man has a special box in his mind that contains nothing. This box is supposed to be our favorite box and it explains how we men can do nothing and think nothing.

Naturally, Gungor is not the first comedian to note the special connection between men and nothing. Jerry Seinfeld famously had a show about nothing and numerous other comics have bits on the subject. Of course, this is not a blog about comedy, but a philosophy blog. Philosophy, as you know, is a lot like comedy, only less funny.

When it comes to nothing in philosophy, it is natural to think of Martin Heidegger and his work Being & Time as well as Sartre’s Being & Nothingness. Since I have no idea what Heidegger meant and I only understand Sartre while eating croissants, I will simply mention them and move along to the question of whether or not men can think or do nothing.

Like all men, I purport to be able to think nothing. To be specific, if a man is asked by his significant other “what are you thinking”, then the best bet is that he will respond by saying “nothing.” It is, of course, tempting to infer that a man says this because he is aware that saying what he was really thinking will result in a look of disgust, a slap, or both. However, men do claim to actually be thinking of nothing (at least at times). This raises the obvious question of whether or not this is even possible.

On one hand, it does seem possible. First, a man could be involved in a profound consideration of the Nothing and its various metaphysical and theological implications. However, the likelihood of this will vary from man to man (and typically hovers just over nothing). Second, Buddhism puts forth the notion that there is no self and there is no world. The ultimate goal is, of course, Nirvana. If the Buddhists are right, then there are no men to think about nothing, but at least there is nothing to think about. So, perhaps all men are actually Buddhists. Third, maybe men have the ability to actually have their minds literally think about nothing. To use an analogy, think of the mind as a blender and thinking as blending. Now, imagine running the blender with nothing in it. If this analogy holds, which it almost certainly does not, then perhaps the mind can think with nothing to think about.

On the other hand, it might seem to be impossible. First, as Hume noted, the mind always seems to have something going on, some perception or another. Hence, a man is never really thinking about nothing; there is always something in the blender. Second, it could be argued that, unlike a blender, a mind cannot engage in its function without some content. Thinking might be more like cutting: while one can make a motion with scissors, one is not cutting unless one has something to cut.

In the case of doing nothing, a man could be doing nothing in the sense that a blender could be blending nothing. Of course, the obvious reply is that while the blender is blending nothing, it is not actually doing nothing. After all, by doing it is doing something. Even thinking about nothing would be doing something, namely thinking about nothing. As such, as long as a man is doing, then he would be doing something; at the very least he is doing. What he is doing, of course, might not amount to much, hence we could be forgiven if we exaggerate and say we are doing nothing.


A Quibble Over Godlessness

Kermitage.

Atheism and agnosticism. If you ask some people, atheism is just a sexed up version of agnosticism. After all, atheism is about what you believe (or don’t believe), and agnosticism is about what you know (or don’t) — so when we say that we’re atheists, we’re just putting the accent on the fact that God is really really really super unlikely. But others will say that atheism and agnosticism are perfect companions. They’ll tell you that agnosticism is just a closeted form of atheism. After all (they’ll say), since agnostics dislike being called ‘theists’, they must be atheists — the one position collapses into the other.

To see an example of this contrast in action, consider the views of Bertrand Russell and Anthony Grayling. Russell argued for atheism in public, and only called himself an agnostic among philosophers. That’s because he thought there was a significant gulf between atheism and agnosticism. By contrast, in a difficult-to-parse exchange with Jerry Coyne, Anthony Grayling begged to differ — an agnostic is either an atheist, or just plain irrational.

Grayling does himself a disservice by repeatedly accusing Russell of “an assimilation of proof concerning matters of fact to proof of the demonstrative kind”, when that doesn’t really seem to be exactly what’s going on. This post is going to be my attempt to make sense of what Grayling is up to, and an argument for why he hasn’t got it right.

~

The vicar of link.

This fight comes down to a complaint in the theory of knowledge. Grayling’s claim is that Russell tacitly bases the distinction between atheism and agnosticism on “a quibble about proof”.

Russell thought that you can’t disprove the existence of a corporeal object — like, say, a mountain made of gold — in the same way that you can disprove 2+2=6. You can prove that 2+2=4, alright; but you can’t prove, strictly speaking, that God (or the golden mountain) doesn’t exist. You can only say, “A golden mountain is pretty unlikely”. Moreover, it would seem that the two kinds of proof can be measured on a common scale — that of certitude. For Russell, following Hume, deduction and induction involve different degrees of warranted certainty. The idea here is that we ought not have as much confidence in inductive proof as we do with deductive proof. Logic and mathematics occupy a kind of heaven, an epistemic ideal; inductive proof, like most of our commonsense knowledge, will always be in perdition. As a result, while it’s tempting to believe that “Rain is wet”, I am really only warranted in believing that “rain is probably wet”. Likewise, you’re only warranted to believe that God is really really really not likely, though you might brand yourself as an atheist.

This is where Grayling and Russell part ways. Grayling believes that Russell is wrong to think that you are warranted in being any less certain about induction than deduction. In other words, Grayling thinks that it is just as provable that rain is wet as it is to say that 2+2 = 4. Instead of putting deduction in heaven, and induction in hell, the two modes of reasoning stand side-by-side. They just seem to be adhering to different standards.

What deduction and induction have in common is that both are capable of their own kind of proof. Grayling seems to think that proof is defined negatively for all kinds of discourse, in terms of what is irrational to reject. As he puts it, a thing is proven so long as we can adduce “evidence of the kind and in the quantity that makes it irrational, absurd, irresponsible or even a mark of insanity to reject the conclusion thus being supported.” So, any belief that is scientifically invalid (e.g., “next time I go out in the rain I won’t get wet”) is just as irrational as a belief that is, for all intents and purposes, impossible to demonstrate (e.g., “rain does not wet anything”, or “2+2=6”). The parenthetical quotes are cases of propositions that are disprovable, and hence (I think it is fair to say) certainly wrong.

This relates to agnosticism and atheism in the following way. Grayling thinks that the concept of “God” is as absurd, irrational, irresponsible, and possibly insane as the concept of “2+2=6” or “rain does not wet anything”. For Grayling, if a person says that they are agnostic about God, they might as well be saying that the jury is still out on whether or not two and two make six. Russell, and a great many following him, would prefer to say that while ‘2+2=6’ is demonstrably false, other howlers (like “there is a golden mountain”, or “rain does not wet anything”) are only probably false.

~

Optimus prime mover.

Or, at least, that’s how I’ve interpreted Grayling. He hasn’t made it easy. The problem is that when he accuses Russell of “an assimilation of proof concerning matters of fact to proof of the demonstrative kind”, the accusation can be made just as effectively against his own view. For while it may be true that Russell is measuring the two kinds of proof along a single spectrum of certainty, there is a sense in which Grayling is doing the very same thing, but in another way! After all, he’s assimilating them negatively: by saying that proof of any kind is the sort of conclusion that is “irrational, absurd, irresponsible or even a mark of insanity to reject”, once supported by sufficient evidence.

~

A more serious problem is that Grayling hasn’t got the commitments of atheism squared away. Grayling argues that “if you seriously mean that you think it might be conceivable or possible that there could be evidence for a deity, [then you are] agnostic, not atheist” (“possible” meaning, I presume, “rationally possible”). In this, he provides an implausible formulation of what it means to be “agnostic” and “atheist”. For an atheist might think that there is no God, or even that it can be proven that there’s no God, while still admitting that God is a rational possibility.

Recall, the only thing that an agnostic needs to say is that they don’t know whether or not God is real. Atheists, in the strongest sense of the term, claim to know that there is no God; in a weaker sense, atheists claim to believe that there is no God, and live their lives accordingly. Grayling is effectively saying, “there are no weak atheists. Go strong or go home.”

Unfortunately, contrary to Grayling’s claims, atheists can possess warranted doubts against their atheism, even if they think it is proven that there is no God. For one does not need to be convinced that God is rationally impossible in order to be an atheist, any more than one needs to think that Yeti are rationally impossible in order to think that they belong in fairy tales. That’s because, while it is indeed quite irrational, absurd, and irresponsible to believe that God exists, that doesn’t mean that it is rationally impossible to believe in God’s existence.

For you to believe that God’s existence is rationally impossible, or “inconceivable”, you must mean “inconceivable by everyone, everywhere, even at their best”. After all, “impossible” is a tough-guy word, a heavy-duty blunt instrument — when you say something’s impossible, you mean business. To say something is “impossible” is to say that it is “necessarily not”; and “necessary” implies universality; and ‘rationally necessary’ implies universality across the class of rational people. So when you say that belief in God is rationally impossible, you’re saying that no rational person can believe in God.

So what’s the difference? Well, since we’re talking about a crowd of rational people, that presumably means that we must assume that even the ideally rational, bold, imaginative, and informed person is in the crowd; and we have to have faith that the most rational person would agree that God has been proven to not exist. By contrast: to prove a thing — even in Grayling’s sense of “proof” — is not to suppose that you believe it as an ideally rational and informed agent, or even that you would retain the same beliefs if you were closer to the ideal. It just means that, according to some standards of discourse, denying a proposition is daft. Never mind whether or not the standards themselves require revision.

Here’s the punchline. If you are rationally compromised (to any substantial degree), then you might have proven something, alright; it’s just that your proof could be working with suboptimal standards. For, as a matter of fact, while you might think that belief in God is irrational, etc., you might also think that you yourself might be (to some substantial degree) irrational, irresponsible, or insane — that is, you might think that you’re an ordinary Joe believer, slumming it with the rest of us. Long story short: “rational impossibility”, if it means anything like what it says on the tin, is an idealized standard that belongs to the epistemic angels, while Grayling’s sense of “proof” belongs to mortals.

So why does Grayling think that God is rationally impossible, or inconceivable? Evidently, he has a narrow view of the impossible. Grayling says that God and Yeti are not the same sort of thing, because at least people can think of the conditions under which the existence of a Yeti might be confirmed (e.g., as being a furry Wookie-like creature). By contrast (he says), it’s not even clear what research programme could be contrived to figure out whether there is a God, because the concept of God is a catch-all wish-fulfiller.

I agree that belief in God is madcap all the way, because the idea of “God” in the mainstream Abrahamic faiths is a nebulous blob, a Rorschach for the credulous. (To me, this is like determining what feats of strength and vigor are possible by visiting a leper farm. But anyway.) Suppose that Grayling’s programme is plausible. If it is, then the fact that it is both irrational to believe in God and irrational to believe in 2+2=6 ought to tell us that it is rationally impossible to think that there is even any evidence that they might be true claims.

And yet, and yet, and yet…! — people still reject Grayling’s account. Because (Grayling says) people hold some lingering fidelity to Rome, one final chain around our ankles from which we have yet to emancipate ourselves. But that explanation seems pretty unlikely when thinking about avowedly Godless heathens and Grayling-dissenters like Jerry Coyne and Richard Dawkins. So there might be a better explanation of why people may be resistant to Grayling’s programme.

Darth Vatican.

First, “conceptual clarity” doesn’t matter in the way Grayling thinks it does. For my part, I agree that “God” is a bullshit concept. But that isn’t really very interesting or important when we’re trying to figure out whether or not the existence of God is rationally possible. Meanings can be clarified — and sometimes, even the relevant, supposedly deep ontological features of the concept can be reconceived — without improving the epistemic standing of the doctrine or the worldly practices of the conceivers. For instance, five years from now, the Roman Catholic Church might declare that God lives in a slum on one of the planets in the Hades Gamma Cluster, thereby turning God into a more exotic sort of Yeti. But this really wouldn’t make any difference to how skeptics think about the Catholic worldview. We’ll never see Hades Gamma, or develop any means of seeing whether he’s out there. The Hades Gamma version of the Catholic Church is just as irrational, madcap, and irresponsible as the Orthodox version.

Second, doubt matters to scientific integrity. Both Coyne and Dawkins make a virtue out of retaining it, and they seem to do so for the sake of the institution of science. A measure of doubt is always sublime.

(Images courtesy of holytaco.com.)


Goodbye (sort of) from me

Long-term readers with good memories may remember me. I co-founded tpm back in 1997 with Jeremy Stangroom and for a long time posted quite frequently at this blog. (111 posts, I see from the right-hand box.) It’s been about nine months since my last and I thought I’d explain some of the changes that lie behind this silence.

Last year, I handed over the editorship of tpm to the wonderful James Garvey, who has taken to the role brilliantly. After 13 years in the editor’s chair (housed first in my London bedsit and latterly in my spare bedroom) I thought a change was overdue. tpm has thrived by evolving and keeping itself fresh and I felt I had more or less exhausted my capacities of refreshment. Plus I needed refreshing too.

I am still involved at tpm as editor-in-chief, but it’s never a good idea for the old guard to hover too close behind the shoulders of the new, so I am retreating quite far into the background. I also have plenty of other projects to get on with. (If anyone is interested in these, I post news of what I’m up to at my website and I’m also on twitter.) Most recently, I have replaced my Philosophy Monthly podcast with a new series called microphilosophy (iTunes users can subscribe here). Of most personal importance is my new book, The Ego Trick. It’s the book I’ve most wanted to write and if you only ever read one of my books, I’d love it if it were this one.

I still write for tpm and will probably post here again from time to time, so this is goodbye rather than farewell. Editing tpm has been a fantastic experience, made possible by the vital and extensive web presence Jeremy has built up. (tpm has always been a project whose web side, led by Jeremy, is at least as important as the print.) I’m very grateful for all the appreciation readers have shown over the years. But all good things (well, almost all) have a shelf-life and I want to go while I’m merely stale, rather than mouldy.

New podcast series

tpm’s editor-in-chief Julian Baggini has started a new podcast series, microphilosophy, which replaces his popular Philosophy Monthly. Each edition will be an interview, talk, discussion or feature, no longer than half an hour but usually much shorter. This first is an interview with the philosopher and theologian Richard Swinburne, conducted for Julian’s new book, The Ego Trick. More podcasts relating to the book will follow over coming weeks. You can download or listen to the podcast here and at iTunes.

Psychopaths & Ethical Egoists

Image: Ayn Rand (via Wikipedia)

There seem to be some interesting similarities between psychopaths and ethical egoists.

Based on the stock account, a psychopath has a deficit (or deviance) in regard to interpersonal relationships, emotions, and self-control. In terms of specific deficiencies, psychopaths are said to lack shame, guilt, remorse and empathy. Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.”

Interestingly enough, these qualities also seem to describe the ethical egoist. Ethical egoism is the ethical theory that individuals ought to maximize their own self-interest. This is generally contrasted with altruism, the view that people should (at least some of the time) take into account the interests of others.

Ethical egoism can also be cast in more general terms as a form of consequentialism. On this sort of view, people should maximize what is of value (V) for the morally relevant beings (MRB). The sort of utilitarianism endorsed by Mill is a form of consequentialism. However, Mill is clearly not an ethical egoist since he considers all humans (and sentient beings) to be morally relevant beings. In the case of the ethical egoist, the scope of morality (who counts as an MRB) extends only to the individual. For example, if I were an ethical egoist, then the MRB would be me (and me alone). If you were an ethical egoist, then your MRB would be you (and you alone). As far as value goes, V could be almost anything. However, it tends to be things like self-interest, pleasure and happiness. Famous ethical egoists include Glaucon (as laid out in his Ring of Gyges tale), Ayn Rand, and Thomas Hobbes.

While this oversimplifies things a bit, those who accept ethical egoism generally claim that people are naturally inclined toward desiring “undue gain” and are not naturally inclined towards sympathy or goodwill towards others. Hobbes makes it rather clear that people are lacking in sympathy and are motivated only by the hope of gain and glory. In many ways, this view seems to cast humans as naturally exhibiting some of the key traits of psychopaths. It is no wonder, then, that Hobbes argues that people do not form society out of mutual good will or on the basis of being social beings. Rather, people form society out of selfishness and it can only be maintained by the power of the sovereign.

However, what defines the theory is not the description of humans but rather the prescriptive element. Proponents of ethical egoism endorse the claim that each person should act so as to maximize value for himself. Rand goes as far as to cast selfishness as a virtue and altruism as the height of foolishness. In a way, Rand could be seen as advocating that people act like psychopaths.

Of course, there are important distinctions between being a psychopath and being an ethical egoist. One is that psychopaths are supposed to behave in ways that are impulsive and irresponsible. This might be because they are also characterized as failing to properly grasp the potential consequences of their actions. This seems to be a general defect in that it applies to the consequences for others as well as for themselves. This reduced ability to properly assess the risks of being doubted, caught, or punished no doubt has a significant impact on their behavior (and their chances of being exposed).

If Glaucon’s unjust man is taken as a role model for ethical egoism, the ethical egoist is supposed to strive to be the opposite of the psychopath in this regard. The successful unjust man is supposed to grasp the consequences of what he does and hence acts in ways that are calculated to conceal his true nature. The unjust man is also supposed to have the impulse control needed to act in ways that make him appear to be just. It is tempting to conclude that an ethical egoist is essentially a psychopath with good impulse control and a grasp of consequences. Or, put another way, that a psychopath is an ethical egoist who is not very skilled at being an ethical egoist.

Interestingly, when Socrates gives his rebuttal to Glaucon, he argues that the unjust man actually does not grasp the true consequences of his actions. That is, the unjust man does not realize that he will corrupt his soul in the process of being unjust. If so, perhaps the ethical egoist is a psychopath with an ethical theory.

Speaking of selfishness, I’m plugging my new book 30 More Fallacies.


Death and Its Concept

Philosophers and non-philosophers stand on a level of equality with respect to death. There are no experts on death, for there is nothing to know about it. Not even those who study the death process have an edge on the rest of us. We are all equals in thinking about death, and we all begin and end thinking about it from a position of ignorance.

Death and its concept are absolutely empty. No picture comes to mind. The concept of death has a use for the living, while death itself has no use for anything. All we can say about death is that it is either real or it is not real. If it is real, then the end of one’s life is a simple termination. If it is not real, then the end of one’s embodied life is not true death, but a portal to another life.

Since death has no content, we must speak of it metaphorically. For those who think death is real, death is a blank wall. For those who think it is not real, death is a door to another life. Whether we think of death as a wall or a door, we cannot avoid using one metaphor or another. We often say that a person who dies is relieved of suffering. However, if death is real, then it is metaphorical even to say that the dead do not suffer, as though something of them remains not to suffer. As there are already many speculations about some sort of ‘next life,’ I will focus on the view that death is real and marks the final end of an individual’s life.

Let us explore the metaphor that death is a wall a bit further. Each of us is born facing this wall. From that moment on, every step we take is towards it, no matter which way we turn. There is simply no other direction to take. Like a fun house mirror, the wall of death shows us our living fears and distorted images of ourselves. All we see when we look at death is a reflection of our own lives.

Death has no subjective meaning at all. It will come to other people, but never to me. Of course, I know that I am going to die. Death means the end of my future. However, as long as I am alive, I will be living toward that future possibility of no longer having possibilities.

The unavoidable conclusion is that, if death is real, neither I nor you will ever personally taste death. I will cease to be conscious before the end. No matter how close I come to it, death recedes before me. I am actually dead only for others. When the end actually arrives, my dead body passes into the hands of the coroner. I will no longer be there. Death is always described from the perspective of the living. As Ludwig Wittgenstein famously put it, “Death is not an event in life.”

The concept of death is unlike most other concepts. Usually we have an object and the concept of that object. For example, we have a horse and the concept of a horse. However, the concept of death is absolutely without any object whatsoever. Thinking about the prospect of one’s own death is a constant meditation upon our own ignorance. There is no method for getting to know death better, because death cannot be known at all.

One trouble with discussing this topic is the instinctive fear of death. We tend to avoid death in our thoughts and actions. However, if we could forget our fears for a minute, we could see more clearly how interesting the concept actually is from a more detached point of view.

Birth and death are the bookends of our lives. Living towards death in time gives one’s life a direction and framework within which to understand the changes that life brings. The world looks very different to the young and the old. The young look forward. The old look back. What matters to us changes as we get older. The prospect of death informs these changes. The young have an intellectual understanding that death comes to us all, but their mortality has not become real to them. For the old, mortality starts to sink in.

For a long time, I have been puzzled by two famous philosophical ideas about death, one from Plato and one from Spinoza. The first is that a philosopher has a vital concern with death and constantly meditates upon it. The second is that the wise person thinks of nothing so little as death. Perhaps the truth is somewhere in the middle. Ignoring death leaves us with a false sense of life’s permanence and perhaps encourages us to lose ourselves in the minutiae of daily life. Obsessive rumination on death, on the other hand, can lead us away from life. Honestly coming to terms with one’s death involves reflection on its significance in one’s life, and thinking about the larger values that give life its meaning. In the end, it is useful to think about death only to the point that it frees us to live fully immersed in the life we have yet to live.

Of Psychopaths & Replicants

Image: Rick Deckard (via Wikipedia)

Seeing Jon Ronson’s interview on The Daily Show got me thinking about psychopaths. I did not buy his book on psychopaths, so I will not comment on it. Rather, I’ll say a bit about spotting psychopaths from a philosophical perspective.

First, a bit about psychopaths. According to the standard view, a psychopath has a deficit (or deviance) in regard to interpersonal relationships, emotions, and self-control.

In terms of specific qualities psychopaths lack, these include shame, guilt, remorse and empathy. These deficits tend to lead psychopaths to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, insensitive, and contemptuous of others.

Psychopaths are supposed to behave in ways that are impulsive and irresponsible. This might be because they are taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect in that it applies to the consequences for others as well as for themselves. This reduced ability to properly assess the risks of being doubted, caught, or punished no doubt has a significant impact on their behavior (and their chances of being exposed).

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.”

Given these behavior traits, it might be wondered how psychopaths are able to avoid detection long enough to actually engage in such behavior. After all, people tend to be on guard against such treatment. The answer is easy enough. First, psychopaths often seem charming. Since they tend to lack a commitment to truth, they are willing and able to say whatever they believe will achieve their goals. Second, they are often adept at using intimidation and manipulation to get what they want. Third, they are often skilled mimics and are able to pass themselves off as normal people.

It is estimated that 1% of the general population is made up of psychopaths. The prison populations are supposed to contain a larger percentage (which would hardly be surprising) and the corporate world is supposed to have an above normal percentage of psychopaths. However, these numbers are not solidly established.

One obvious problem facing anyone attempting to determine the number of psychopaths is that they will tend to do their best to hide their true nature. After all, intelligent psychopaths will generally get that they are not like other people and that normal people will tend to react negatively to them. The same holds true in attempts to determine whether or not a specific person is a psychopath. In many ways, the psychopath is like Glaucon’s unjust man in the Ring of Gyges story: he is a person who wants to do what he wants without regard to others, but needs to avoid being recognized for what he is.

As noted above, psychopaths are characterized as possessing traits that would tend to result in their exposure: poor impulse control, difficulty with behaving responsibly, and a poor capacity for assessing consequences. Their deficiency in regard to empathy also probably makes it more difficult for them to blend in properly. These could be called “exposure traits” in that they tend to expose the psychopath to others.

One rather interesting point to consider is whether or not these exposure traits are actually traits that are essential components of being a psychopath. After all, they might merely be traits possessed by the psychopaths that have been exposed. To advance this discussion, I will head into the territory of science fiction.

In science fiction, one interesting problem is the thing problem. This problem gets its name from Carpenter’s classic horror film The Thing (which is based on “Who Goes There?” by John W. Campbell). The thing is an inimical alien that can almost flawlessly imitate any living thing it has consumed. In the case of the movie, the humans had to sort out who goes there: a human or a thing. In the case of psychopaths, the challenge is to distinguish between normal humans and psychopaths. In the movie, a test is devised: each part of a thing is its own creature and will try to survive, even if that means exposure of another thing. So, sticking a hot wire into a blood (or thing juice) sample will reveal whether the person is human or thing: if the “blood” squeals and tries to escape, the donor is a thing.

This test will, of course, expose any thing. Or, more accurately, expose any thing that acts as expected. If a thing were, contrary to the way things are supposed to be, able to suppress the survival response of one of its parts, it would pass the test and remain undetected. As such, any exposed thing would be a thing that could not do this, and this would lead the humans to believe that things cannot do this. At least until the things that could do this finished them off.

If you prefer machines or replicants to things, this situation can also be presented in terms borrowed from Philip K. Dick’s works. In Blade Runner (based on Do Androids Dream of Electric Sheep?) there are replicants that can easily pass for humans, with one exception: they cannot pass the Voight-Kampff Test because they do not have the time to develop the responses of a normal human. The similarity to the Hare checklist is obvious. Of course, the test only works on replicants that cannot mimic humans well enough to pass it. A replicant that could give the right responses would, of course, pass as human.

Dick’s short story “Second Variety” also presented human-like machines, the claws. These machines were made for a world war and eventually broke free of human control, developing machines that could pass as humans (as our smart phones will do someday). Unlike the replicants, the claws were always intent on killing humans, thus necessitating a means to tell them apart. The early models were easily recognized as being non-human. Unfortunately for the humans in the story, the only way they could tell the most advanced models from humans was by seeing multiple claws of the same variety together. Otherwise, they easily passed as humans right up until the point they started killing.

It seems worth considering that the same might apply to psychopaths. To be specific, normal people can catch the psychopaths that are poor mimics, have poor impulse control, have difficulty with behaving responsibly, and possess a poor capacity for assessing consequences. However, the psychopaths that are better mimics, have better impulse control, can seem to act responsibly, and can assess consequences would be far more difficult to spot. Such psychopaths could easily pass as normal humans, much like Glaucon’s unjust man is able to conceal his true nature. As such, perhaps the experts think that these specific traits are part of what it is to be a psychopath because these traits are possessed by the psychopaths they have caught. However, as with the more advanced claws, perhaps the most dangerous psychopaths are eluding detection. At least until it is too late.


Changing the world

On Friday, I attended the last seminar in a series hosted by the HEA called “Philosophy and Public Policy: Making an Impact”. The seminars were genuinely interesting.

There was a good talk by Baroness Onora O’Neill called “Interpreting the World, Changing the World” – a reference to Marx’s Theses on Feuerbach, and his famous line, “philosophers have only interpreted the world, the point is to change it”. With worries about funding in the humanities depending on the impact of research, at least some philosophers are trying to find ways to change the world, or anyway make a noticeable dent in it. A good thought emerged in the discussion: both Marx and those who call for impact presuppose that there’s nothing worth preserving or maintaining in the world that we’ve got.

G E Moore to one side, philosophers don’t score points for writing papers in agreement with what everyone else says. They’re not invited to give talks which argue that everyone’s more or less got the right idea about, say, the mind-body relation. Philosophers are supposed to be independent, have new ideas, question the assumptions of others, find reasons to take issue with the status quo. I wonder how that might skew the pursuit of wisdom when, admittedly only very occasionally, our thoughts in some domain are genuinely unobjectionable.

Dear Larry

In 2005, I was a philosophy graduate student across the Charles River from MIT, where the then president of Harvard, Lawrence Summers, made his infamous remarks questioning the intellectual capabilities of women. He was giving a speech at a conference where he offered that a possible reason for the low number of women in high academic positions in science and engineering was an innate limitation. He said this was a more likely explanation for women’s underperformance than discrimination. I remember sinking lower in my chair as I heard the news. Summers’ remarks, though they were geared explicitly to other fields, nonetheless threw into doubt women’s capacity for deep thought in general.

In my field, I found that women were just as intelligent as the brightest men; the only difference was that there were far fewer of them. If Summers was questioning the reasons for the underrepresentation of women, then it would follow that he should suspect the same for people of other ethnicities. As Dr. Mary Waters, Chair of Sociology at Harvard, said after his remarks: “Has anyone asked if he thinks this about African-Americans, because they are underrepresented at this university? Are Hispanics inferior? Are Asians superior?” Yet this is something he didn’t publicly do, and if he had, he certainly would have enraged the public far more than he did with his comments about women’s inferiority. But I will leave that point to the side.

To be fair, in a way Summers’ claim that discrimination cannot be the cause of women’s comparatively inferior performance makes sense. We live in a society in which job discrimination is illegal and where there are quotas for hiring women. Furthermore, it is no longer socially acceptable, the way it was a generation or two ago, to discourage women from pursuing careers outside of the home. It could be argued that, to all appearances, women no longer have any external barriers preventing them from success. Thus, persons such as Summers conclude that the barrier must be internal. I am sure that Summers, a person very much in the public eye, would not see himself as personally standing in the way of women’s performance. Indeed, he likely took his comments to be mere commentary on the facts of the situation—a neutral discussion of a phenomenon. Yet this is why they were so insidious.

Enter Cordelia Fine’s Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference (2010). In this book, Fine argues that sex difference on the level of intellectual capacity is bogus. She analyzes the studies often cited by defenders of women’s inferiority, and shows how they have subsequently been proven fallacious by other scientists or how the wrong conclusions have been drawn from them. One of the most famous experiments that still captivates many, despite having been invalidated by subsequent studies, is the one which claims that there are cognitive differences between male and female babies only a few days old: female babies are drawn to faces whereas male babies prefer mechanisms. The truth is that the science in favor of women’s intellectual strength doesn’t get much press. But this is only part of the problem. Even if there is no innate intellectual difference, why are women underrepresented?

Fine says that despite society’s attempts to eradicate explicit sexual discrimination, there is a more subtle type of discrimination at work. Recent studies in psychology bear this out. Fine highlights studies that reveal how we are deeply affected subconsciously by the expectations that our environment puts on us. In one study, a group of women were informed prior to taking an exam in mathematics that women tend to underperform compared to men, and another group of women were not. The group that was exposed to the stereotype threat performed far worse than the other group. Fine quotes Gregory Walton and Steven Spencer, two professors at Stanford University who argue that women’s performance is affected by stereotype threats, like “the time of a track star running into a stiff headwind.” The reason that women perform less well than men on intellectual tests is that they are receiving a tacit signal from those around them that they are not good enough.

Of course, men are also affected by stereotype threats. Fine discusses studies that challenge the claim that men are “more aggressive” or “less empathetic” than women. One study, for example, reveals that men who are primed before a questionnaire with the claim that scientific studies show them to be very empathetic creatures rate their nurturing abilities more positively than a group of men who were not primed.

What Fine suggests is that—whether we like it or not—the expectations our society has for us affect our self-perceptions, and thus our performance. In the case of intellectual pursuits, women still underperform men because they are getting the message that they do not belong there. Thus it is not enough to simply offer women equal opportunity. It is necessary for those in positions of influence to identify and eradicate the stereotype threats that they unwittingly promote. It is only then that we can actually achieve the ideals that liberalism promised so many of us centuries ago.

Pictures of the Dead

Image: Princess Diana on a royal visit (via Wikipedia)

In addition to the fact that they both died violent deaths, Bin Laden and Princess Diana share the fact that pictures of their deaths have generated controversy. In the case of Bin Laden, the decision was made not to release the photos of his corpse. In the case of Princess Diana, the infamous paparazzi photo of her death is featured in the upcoming film Unlawful Killing.

As far as the legality of the matter goes, it is easy enough to settle. Obama certainly has the legal right not to release the photos of Bin Laden. Legal steps can be taken to have the photos released, of course. In the case of the photo of Princess Diana, it is perfectly legal for it to be shown, at least in the United States and France. The UK is less enamored of the freedom of the press and the film will, as of this writing, not be shown there. What is more interesting than the legality is the matter of ethics.

The main argument given against the release of the Bin Laden photo is that it would incite people to violence. From a moral standpoint, this can be seen as a utilitarian argument (or simply as a pragmatic argument): releasing the photo would have harmful consequences, therefore it should not be done.

Given the power of images, this does have a certain appeal. An image of the dead Bin Laden would certainly have more emotional impact than the mere statement that he is dead. However, it also seems reasonable to consider the obvious: if killing Bin Laden would not inspire a person to violence, then seeing a photo most likely would not push the person over the edge. As such, this argument is not particularly strong. Perhaps a better reason can be found by considering the death photo of Princess Diana.

One argument that can be used in the case of Princess Diana is that such a photo should not be shown out of respect for her and her family. On the face of it, it seems reasonable to hold that a graphic death photo should not be shown unless there is a compelling reason to show the photo. As such, the burden of proof would be on those who contend such a photo should be shown.

In some cases compelling reasons can be given. For example, the photo of Princess Diana was shown (with her face blurred) during the investigation of the crash that killed her. This sort of use seems to be legitimate. Another example would be when showing a picture serves a laudable purpose, such as revealing the true horror of war or crime. However, to show such an image merely to amuse, shock, or make money would seem to be morally unjustified. This is not to say that such a showing should be prohibited by law. Rather, it is to say that it should not be done. If this line of reasoning is solid, then the film should not include the picture unless it can be shown that there is a compelling reason to include the image.

Turning back to Bin Laden, it is rather tempting to hold the view that he is not worthy of such respect. After all, he planned the deaths of thousands and showed no concern over the harm he did to them or their families. As such, there would be no compelling reason not to show his image so as to protect his dignity. In fact, it could be argued that if showing the photo would somehow harm him or those who care about him, then this would be a reason to show it.

That said, I do believe that an appeal to dignity can be made against showing the picture of Bin Laden and the picture of Princess Diana.

In the case of Bin Laden, showing the graphic photo would not be an unjust affront to his dignity. However, it would be an undignified act on our part. While it is tempting to take a trophy from a fallen foe and parade it about in bloody splendor, we should be better than that and it should be beneath our dignity as a people. Just as we pride ourselves on not wantonly slaughtering innocents, we should also pride ourselves on not showing graphic images of Bin Laden. In short, if we claim we are better than our enemies, we need to actually act better than they do and this includes not treating the dead as trophies.

In the case of Princess Diana, showing the graphic photo of her would seem to be a clear demonstration of the lack of dignity of those who elected to show the photo (presumably to attract attention to the film). As such, this is something they should not do; after all, they should be better people. Naturally, if it can be shown that the image is being used in a way that is compelling (because it is critical to the aesthetic value of the film, for example) then it could be used in a way consistent with dignity.
