Category Archives: Psychology

Defining Our Gods

The theologian Alvin Plantinga was interviewed for The Stone this weekend, making the claim that Atheism is Irrational. His conclusion, however, seems to allow that agnosticism is pretty reasonable, and his thought process is based mostly on the absurdity of the universe and the hope that some kind of God will provide an explanation for whatever we cannot make sense of. These attitudes seem to me to require that we clarify a few things.

There are a variety of different intended meanings behind the word “atheist” as well as the word “God”. I generally make the point that I am atheistic when it comes to personal or specific gods like Zeus, Jehovah, Jesus, Odin, Allah, and so on, but agnostic if we’re talking about deism, that is, when it comes to an unnamed, unknowable, impersonal, original or universal intelligence or source of some kind. If this second force or being were to be referred to as “god” or even spoken of through more specific stories in an attempt to poetically understand some greater meaning, I would have no trouble calling myself agnostic as Plantinga suggests. But if the stories or expectations for afterlife or instructions for communications are meant to be considered as concrete as everyday reality, then I simply think they are as unlikely as Bigfoot or a faked moon landing – in other words, I am atheistic.

There are atheists who like to point out that atheism is ultimately a lack of belief, and therefore as long as you don’t have belief, you are atheistic – basically, those who have traditionally been called agnostics are just as much atheists. The purpose of this seems to be to expand the group of people who will identify more strongly as non-believers, and to avoid nuance – or what might be seen as hesitation – in self-description.

However, this allows for confusion and unnecessary disagreement at times. I think in fact that there are a fair number of people who are atheistic when it comes to very literal gods, like the one Ken Ham was espousing in his debate with Bill Nye. Some people believe, as Ken Ham does, that without a literal creation, the whole idea of God doesn’t make sense, and so believe in creationism because they believe in God. Some share this starting point, but are convinced by science and conclude there is no god. But others reject the premise and don’t connect their religious positions with their understandings of science. It’s a popular jab among atheists that “everyone is atheistic when it comes to someone else’s gods”, but it’s also a useful description of reality. We do all choose to not believe certain things, even if we would not claim absolute certainty.

Plenty of us would concede that only math or closed systems can be certain, so it’s technically possible that any conspiracy theory or mythology at issue is actually true – but still in general it can be considered reasonable not to believe conspiracy theories or mythologies. And if one includes mainstream religious mythologies with the smaller, less popular, less currently practiced ones, being atheistic about Jesus (as a literal, supernatural persona) is not that surprising from standard philosophical perspectives. The key here is that the stories are being looked at from a materialistic point of view – as Hegel pointed out, once spirituality is asked to compete in an empirical domain, it has no chance. It came about to provide insight, meaning, love and hope – not facts, proof, and evidence.

The more deeply debatable issue would be a broadly construed and non-specific deistic entity responsible for life, intelligence or being. An argument can be made that a force of this kind provides a kind of unity to existence that helps to make sense of it. It does seem rather absurd that the universe simply happened, although I am somewhat inclined to the notion that the universe is just absurd. On the other hand, perhaps there is a greater order that is not always evident. I would happily use the word agnostic to describe my opinion about this, and the philosophical discussion regarding whether there is an originating source or natural intelligence to being seems a useful one. However, it should not be considered to be relevant to one’s opinion about supernatural personas who talk to earthlings and interfere in their lives.

There are people who identify as believers who really could be categorized as atheistic in the same way I am about the literal versions of their gods. They understand the stories of their religions as pathways to a closer understanding of a great unspecified deity, but take them no more literally than Platonists take the story of the Cave, which is to say, the stories are meant to be meaningful and the concrete fact-based aspect is basically irrelevant. It’s not a question of history or science: it’s metaphysics. Let’s not pretend any of us know the answer to this one.

Losing your illusions

Analytic philosophy has been enormously influential in part because it has been an enormous philosophical success. Consider the following example. Suppose it were argued that God must exist, because we can meaningfully refer to Him, and reference can only work so long as a person refers to something real. Once upon a time, something like that argument struck people as a pretty powerful argument. But today, the analytic philosopher may answer: “We have been misled by our language. When we speak of God, we are merely asserting that some thing fits a certain description, and not actually referring to anything.” That is the upshot of Russell’s theory of descriptions, and it did its part in helping to disarm a potent metaphysical illusion.

Sometimes progress in philosophy occurs in something like this way. Questions are not resolutely answered, once and for all — instead, sometimes an answer is proposed which is sufficiently motivating that good-faith informed parties stop asking the original question. Consider, for instance, the old paradox, “If a tree falls in the forest, and no-one is around, does it make a sound?” If you make a distinction between primary and secondary qualities, then the answer is plainly “No”: for while sounds are observer-dependent facts, the vibration of molecules would happen whether or not anyone was present. If you rephrase the question in terms of the primary qualities (“If a tree falls in the forest, and no-one is around, do air molecules vibrate?”), then the answer is an obvious “yes”. A new distinction has helped us to resolve an old problem. It is a dead (falsidical) paradox: something that seems internally inconsistent, but which just turns into a flat-out absurdity when put under close scrutiny.

Interesting as those examples are, it is also possible that linguistic analysis can help us resolve perceptual illusions. Consider the image below (the Muller-Lyer illusion, taken from the Institut Nicod's great Cognition and Culture lab). Now answer: “Which line is longer?”


Fig. 1. Which line is longer?

Most participants will agree that the top line appears longer than the bottom one, despite the fact that they are ostensibly the same length. It is an illusion.

Illusions are supposed to be irresolvable conflicts between how things seem to you and how they actually are. For example, a mirage is an illusion, because if you stand in one place, then no matter how you present the stimuli to yourself, it will look as though a cloudy water puddle is hovering there somewhere in the distance. The mirage will persist regardless of how you examine it or think about it. There is no linguistic-mental switch you can flip inside your brain to make the mirage go away. Analytic philosophers can’t help you with that. (Similarly, I hold out no hope that an analytic philosopher’s armchair musings will help to figure out the direction of spin for this restless ballerina.)

However, as a matter of linguistic analysis, it is not unambiguously true that the lines are the same length in the Muller-Lyer illusion. Oftentimes, the concept of a “line” is not operationally defined. Is a line just whatever sits horizontally? Or is a line whatever is distinctively horizontal (i.e., whatever is horizontal, such that it is segmented away from the arrowhead on each end)? Let’s call the former a “whole line”, and the latter a “line segment”. Of the two construals, it seems to me that it is best to interpret a line as meaning “the whole line”, because that is just the simplest reading (i.e., it doesn’t rely on arbitrary judgments about “what counts as distinctive”). At the end of the day, though, both of those interpretations are plausible readings of the meaning of ‘line’, and we’re not told which definition we ought to be looking for.

I don’t know about you, but when I concentrate on framing the question in terms of whole lines, the perceptual illusion outright disappears. When asked, “Is one horizontal-line longer than the other?”, my eyes focus on the white space between the horizontal lines, and my mind frames the two lines as a vibrant ‘equals sign’ that happens to be bookended by some arrowheads in my peripheral vision. So the answer to the question is a clear “No”. By contrast, when asked, “Is one line-segment longer than the other?”, my eyes focus on the points at the intersection of each arrowhead, and compare them. And the answer is a modest “Yes, they seem to be different lengths” — which is consistent with the illusion as it has been commonly represented.

Now for the interesting part.

Out of curiosity, I measured both lines according to both definitions (as whole lines and as line segments). In the picture below, the innermost vertical blue guidelines map onto the ends of the line segments, while the outermost vertical blue guidelines map onto the edges of the bottom line:


Fig 2. Line segments identical, whole lines different.

Once I did this, I came to a disturbing realization: the whole lines in the picture I took from the Institut Nicod really are different lengths! As you can see, the very tips of the bottom whole line fail to align with the inner corner of the top arrow.

As a matter of fact, the bottom whole line is longer than the top whole line. This is bizarre, since the take-home message of the illusion is usually supposed to be that the lines are equal in length. But even when I was concentrating on the whole lines (looking at the white space between them, manifesting an image of the equals sign), I didn’t detect that the bottom line was longer, and probably would not have even noticed it had it not been for the fact that I had drawn vertical blue guidelines in Fig. 2. Still, when people bring up the Muller-Lyer illusion, this is not the kind of illusion that they have in mind.
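For what it’s worth, this kind of measurement can also be automated. The sketch below is my own rough illustration, not part of the original analysis: it assumes a black-on-white copy of the figure saved as “mullerlyer.png” (a stand-in filename) with the top and bottom figures in the upper and lower halves of the image, and it only measures whole lines — comparing line segments would additionally require locating the arrowhead vertices, which the sketch does not attempt.

```python
# Rough, hypothetical sketch: measure the "whole line" widths in pixels instead
# of eyeballing hand-drawn guidelines. Assumes "mullerlyer.png" (stand-in name)
# is black ink on a white background, top figure above, bottom figure below.
from PIL import Image
import numpy as np

ink = np.array(Image.open("mullerlyer.png").convert("L")) < 128  # True where ink is

def whole_line_width(half):
    # The row with the most ink is the row the horizontal line sits on.
    row = half[half.sum(axis=1).argmax()]
    cols = np.where(row)[0]
    return int(cols.max() - cols.min() + 1)  # tip-to-tip width of the whole line

top_half, bottom_half = np.array_split(ink, 2, axis=0)
print("top whole line (px):   ", whole_line_width(top_half))
print("bottom whole line (px):", whole_line_width(bottom_half))
```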

(As an aside: this is not just a problem with the image chosen from Institut Nicod. Many iterations of the illusion face the same or similar infelicities. For example, in the bottom three-arrows figure of this Wikipedia image, you will see that a vertical dotted guideline is drawn which compares whole lines to line segments. This can be demonstrated by looking at the blue guidelines I superimposed on the image here.)

Can the illusion be redrawn, such that it avoids the linguistic confusion? Maybe. At the moment, though, I’m not entirely sure. Here is an unsatisfying reconstruction of the Nicod image, where both line segment and whole line are of identical length for both the top arrow and the bottom one:


Fig 3. Now the two lines are truly equal (both as whole lines and as segments).

Unfortunately, when it comes to Fig. 3., I find that I’m no longer able to confidently state that one line looks longer than the other. At least at the moment, the illusion has disappeared.

Part of the problem may be that I had to thicken the arrowheads of the topmost line in order to keep them equal, both as segments and as wholes. Unfortunately, the line thickening may have muddied the illusion. Another part of the problem is that, at this point, I’ve stared at Muller-Lyer illusions for so long today that I am starting to question my own objectivity in being able to judge lines properly.

[Edit 4/30: Suppose that other people are like me, and do not detect any illusion in (Fig. 3). One might naturally wonder why that might be.

Of course, there are scientific explanations of the phenomenon that don't rely on anything quite like analytic philosophy. (e.g., you might reasonably think that the difference is that our eyes are primed to see in three dimensions, and that since the thicker arrows appear to be closer to the eye than the thin ones, it disposes the mind to interpret the top line as visually equal to the bottom one. No linguistic analysis there.) But another possibility is that our vision of the line segment is perceptually contaminated by our vision of the whole line, owing to the gestalt properties of visual perception. This idea, or something like it, already exists in the literature in the form of assimilation theory. If so, then we observers really do profit from making an analytic distinction between whole lines and line segments in order to help diagnose the causal mechanisms responsible for this particular illusion -- albeit, not to make it disappear.

Anyway. If this were a perfect post, I would conclude by saying that linguistic analysis can help us shed light on at least some perceptual illusions, and not just dismantle paradoxes. Mind you, at the moment, I don't know if this conclusion is actually true. (It does not bode well that the assimilation theory does not seem very useful in diagnosing any other illusions.) But if it were, it would be just one more sense in which analytic philosophy can help us to cope with our illusions, if not lose them outright.]

Time for Biology, or Must We Burn Nagel?

 

NYU Philosopher Thomas Nagel’s new book Mind and Cosmos has faced quite a bit of criticism from reviewers so far. And perhaps that’s simply to be expected, as the book is clearly an attempt to poke holes in a standard mechanistic view of life, rather than lay out any other fully formed vision. The strength seems to lie in the possibility of starting up a conversation. The weakness, unfortunately, seems to be in the recycling of some unconvincing arguments that make that unlikely.

The key issue that I think deserves closer inspection is the concept of teleology. Nagel reaches too far into mystical territory in his attempt to incorporate a kind of final cause, but some of his critics are too quick to reject the benefit of interpreting physics with a broader scope. While functionalists, or systemic or emergence theorists, may be more aware of the larger meaning of causality, it is still the case that many philosophers express a simplistic view of matter.

The word teleology has become associated with medieval religious beliefs, and much like the word virtue, this has overshadowed the original Aristotelian meaning. Teleology, in its classic sense, does not represent God’s intention, or call for “thinking raindrops.” Instead, it is a way to look at systems rather than billiard balls. Efficient causes are those individual balls knocking into each other, the immediate chain of events that Hume so adeptly tore apart. Final causes are the overall organization of events. The heart beats because an electrical impulse occurs in your atria, but it also beats because there is a specific set of genetic codes that sets up a circulatory system. No one imagines it is mere probability that an electrical impulse happens to occur each second.

Likewise, the rain falls because the water vapor has condensed, but it also falls because it is part of a larger weather system that has a certain amount of CO2 due to the amount of greenery in the area. It falls in order to water the grass, not in the sense that it intends to water the grass, but in the sense that it is part of a larger meteorological relationship, and it has become organized to water the grass which will grow to produce the right atmosphere to allow it to rain, so the grass can grow, so the rain can fall. These larger systemic views are what determine teleological causes, because they provide causes within systems, or roles that each part must play. This is distinct from the simple random movement that results from probability. It is obvious in some situations that systems exist, but sometimes we can’t see the larger system, and sometimes even when we do, we can’t explain its interdependence or unified behavior from individuated perspectives. Relying on efficient causality is thinking in terms of those interactions we see directly. Final causality means figuring out what the larger relationships are.

Now, those larger relationships may build out of smaller and more direct relationships, but a final cause is the assumption of an underlying holistic system. And if this were not the case, Zeno would be right and Einstein would be wrong; Hume’s skepticism would be validated and we truly would live in randomness – or really, we wouldn’t, as nothing would sustain itself in such a world. The primary thing about a world like this is that it is static, based only on matter but not on movement, which is to say, based only on a very abstracted and unreal form of matter that does not persist through time. Instead, the classic formation requires a final system that joins the activity of the world.

What this system is or how it works is not easily answered, but it must involve the awareness that temporality and interconnectedness are not the same as mysticism or magic. To boil all science down to a series of probabilistic events misunderstands the essential philosophical interest in understanding the bigger picture, or why the relation of cause and effect is reliable. The primary options are a metaphysics like Aristotle’s that unites being, a Humean skepticism about causality, or a Kantian idealism that attributes it to human perspective.  Contemporary philosophers often run from the metaphysical picture, preferring to accept the skeptic’s outlook with a shrug (anything’s possible, but, back to what we’ve actually seen…) or work with some kind of neo-Kantian framework (nature only looks organized to us because we’re the result of it).

But attempts to think about the unified nature of being – as seen in the history of philosophy everywhere from the ancients through thinkers as diverse as Schopenhauer, Emerson, or Heidegger – should not be dismissed as incompatible with science. Too often it is a political split instead of a truly thoughtful one that leads to the rejection of holistic accounts. What I appreciate about Nagel’s attempt here is that he is honestly thinking rather than assuming that experts have worked things out. Philosophers tend to defer to scientists in contemporary discussions, which means physicists have been doing most of the metaphysics (which has hardly made it less speculative). It seems that exploring the meaning of scientific assumptions and paradigms is exactly the area we should be in.

Questioning a mechanistic abiogenesis or natural selection may be untenable in current biological journals, but philosophy’s purview is the bigger picture, and it is healthy for us to reach beyond the curtain, not feeling constrained by what’s already been accepted. While my questions are not the same as Nagel’s (and I won’t review his case here), I am glad at least to see the connection made coherently. Writers in philosophy of mind often make arguments that seem incompatible with certain scientistic assumptions but simply do not address the issue. There are options beyond ignoring the natural sciences or demanding a boiled down, mechanical, deterministic view of life. Scientific research has inched toward more dynamic or creative ideas of natural change (like emergence, complexity theory, or neuroplasticity) and theories of holism (at least in physics), so challenges should not be associated with a rejection of investigation or an embracing of mythology. We all know philosophy is meant to begin in wonder – but perhaps that’s become too much of a cliché and not enough of a mission statement.

To thine own self be

Daniel Little leads a double life as one of the world’s most prolific philosophers of social science and author of one of the snazziest blogs on my browser start-up menu. Recently, he wrote a very interesting post on the subject of authenticity and personhood.

In that post, Little argues that the very idea of authenticity is grounded in the idea of a ‘real self’. “When we talk about authenticity, we are presupposing that a person has a real, though unobservable, inner nature, and we are asserting that he/she acts authentically when actions derive from or reflect that inner nature.” For Little, without the assumption that people have “real selves” (i.e., a set of deep characteristics that are part of a person’s inner constitution), “the idea of authenticity doesn’t have traction”. In other words: Little is saying that if we have authentic actions, then those actions must issue from our real selves.

However, Little does not think that the real self is the source of the person’s actions. “…it is plausible that an actor’s choices derive both from features of the self and the situation of action and the interplay of the actions of others. So script, response, and self all seem to come into the situation of action.”

So, by modus tollens, Little must not think there is any such thing as authentic actions.

But — gaaah! That can’t be right! It sure looks like there is a difference between authentic and inauthentic actions. When a homophobic evangelical turns out to be a repressed homosexual, we are right to say that their homophobia was inauthentic. When someone pretends to be an expert on something they know nothing about, they are not being authentic. When a bad actor is just playing their part, Goffman-style: not authentic.

So one of the premises has to go. For my part, I would like to take issue with Little’s assertion that the idea of authenticity “has no traction” if there is no real self. I’d like to make a strong claim: I’d like to agree that the idea of a ‘real self’ is an absurdity, a non-starter, but that all the same, there is a difference between authentic and inauthentic actions. Authenticity isn’t grounded in a ‘real (psychological) self’ — instead, it’s grounded in a core self, which is both social and psychological.


If you ever have a chance to wander into the Philosophy section at your local bookstore you’ll find no shortage of books that make claims about the Real Self. A whole subgenre of the philosophy of the ‘true self’ is influenced by the psychodynamic tradition in psychology, tracing back to the psychoanalyst D.W. Winnicott.

For the Freudians, the psyche is structured by the libido (id), which generates the self-centred ego and the sociable superego. When reading some of the works that were inspired by this tradition, I occasionally get the impression that the ‘real self’ is supposed to be a secret inner beast that lies within you, waiting to surface when the right moment comes. That ‘real self’ could be either the id, or the ego.

On one simplistic reading of Freud, the id was that inner monstrosity, and the ego was akin to the ‘false self’.* On many readings, Freud would like to reduce us all to a constellation of repressed urges. Needless to say (I hope), this reductionism is batty. You have to be cursed with a comically retrograde orientation to social life to think that people are ultimately just little Oedipal machines.

Other theorists (more plausibly) seem to want to say that the ego is hidden beneath the superego — as if the conscience were a polite mask, and the ego were your horrible true face. But I doubt that the ego counts as your ‘real self’, understood in that way. I don’t think that the selfish instincts operate in a quasi-autonomous way from the social ones, and I don’t think we have enough reason to think that the selfish instincts are developmentally prior to the social ones. Recent research done by Michael Tomasello has suggested that our pro-social instincts are just as basic and natural as the selfish ones. If that is right, then we can’t say that the ego is the ‘real self’, and the superego is the facade.


All the same, we ought to think that there is such a thing as an ‘authentic self’. After all, it looks as though we all have fixed characteristics that are relatively stable over time, and that these characteristics reliably ground our actions in a predictable way. I think it can be useful, and commonsensical, to understand some of these personality traits as authentic parts of a person’s character.

On an intuitive level, there seem to be two criteria for authenticity which distinguish it from inauthentic action. First, drawing on work by Harry Frankfurt, we expect that authenticity should involve wholeheartedness — which is a sense of complacency with certain kinds of actions, beliefs, and orientation towards states of affairs. Second, those traits should be presented honestly, and in line with the actual beliefs that the actor has about the traits and where they come from. And notice that both of these ideas, wholeheartedness and honesty, make little or no allusion to Freudian psychology, or to a mysterious inner nature.

So the very idea of authenticity is both a social thing and a psychological thing, not either one in isolation. It makes no sense to talk about an authentic real self, hidden in the miasma of the psyche. The idea is that being authentic involves doing justice to the way you’re putting yourself forward in social presentation as much as it involves introspective meditation on what you want and what you like.

By assuming that the authentic self is robustly non-social (e.g., something set apart from “responses” to others), we actually lose a grip on the very idea of authenticity. The fact is, you can’t even try to show good faith in putting things forward at face value unless you first assume that there is somebody else around to see it. Robinson Crusoe, trapped on a desert island, cannot act ‘authentically’ or ‘inauthentically’. He can only act, period.

So when Little says that “script, response, and self all seem to come into the situation of action”, I think he is saying something true, but which does not bear on the question of whether or not some action is authentic. To act authentically is to engage in a kind of social cognition. Authenticity is a social gambit, an ongoing project of putting yourself forward as a truth-teller, which is both responsive to others and grounded in projects that are central to your concern.

In this sense, even scripted actions can be authentic. “I love you” is a trope, but it’s not necessarily pretence to say it. [This possibility is mentioned at the closing of Little's essay, of course. I would like to say, though: it's more than just possible, it's how things really are.]

* This sentence was substantially altered after posting. Commenter JMRC, below, pointed out that it is probably not so easy to portray Freud in caricature.


Men, Women and Consent

A little while ago I flagged up a new interactive philosophy experiment that deals with issues of consent. It’s now been completed by well over a thousand people, and it’s throwing up some interesting results. In particular, and I can’t say I find it surprising, there seems to be quite a large difference between how men and women view consent.

(What’s to follow will make more sense if you complete the activity before reading.)

I’ve analysed the responses to two of the scenarios featured in the experiment. The first asks whether you would be doing something wrong if you went ahead with a sexual encounter in the knowledge that your partner would almost certainly come to regret it later. The second asks whether you would be doing something wrong if you went ahead with a sexual encounter in the knowledge that your partner (a) had been drinking (albeit they remain cogent); and (b) would not have consented to the sexual encounter if they hadn’t been drinking.

The data shows that 68% of women, compared to only 58% of men, think it would be wrong to go ahead with the sexual encounter in the Future Regret case. And that 79% of women, compared to only 70% of men, think it would be wrong to go ahead in the Alcohol case.

These results are easily statistically significant, although, as always, I need to point out that the sample is not representative, and that there might be confounding variables in play (e.g., it’s possible that there are systematic differences between the sorts of males and females who have completed this activity – e.g., age).
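For the curious, here is a minimal sketch of the kind of two-proportion z-test that underlies a claim like this. The group sizes below are hypothetical (the post reports only percentages and a total of “well over a thousand” respondents), so treat it as an illustration of the method rather than a reanalysis of the actual data.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical split: ~600 women and ~600 men out of "well over a thousand" responses.
n_women, n_men = 600, 600
z, p = two_proportion_ztest(round(0.68 * n_women), n_women, round(0.58 * n_men), n_men)
print(f"Future Regret case: z = {z:.2f}, p = {p:.4f}")  # roughly z ≈ 3.6, p < 0.001
```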

Pain, Pills & Will

There are many ways to die, but the public concern tends to focus on whatever is illuminated in the media spotlight. 2012 saw considerable focus on guns and some modest attention on a somewhat unexpected and perhaps ironic killer, namely pain medication. In the United States, about 20,000 people die each year (about one every 19 minutes) due to pain medication. This typically occurs from what is called “stacking”: a person will take multiple pain medications and sometimes add alcohol to the mix, resulting in death. While some people might elect to use this as a method of suicide, most of the deaths appear to be accidental—that is, the person had no intention of ending his life.

The number of deaths is so high in part because of the volume of painkillers being consumed in the United States. Americans consume 80% of the world’s painkillers, and consumption jumped 600% from 1997 to 2007. Of course, one rather important matter is why there is such excessive consumption of pain pills.

One reason is that doctors have been complicit in the increased use of pain medications. While there have been some efforts to cut back on prescribing pain medication, medical professionals were generally willing to write prescriptions for pain medication even in cases when such medicine was not medically necessary. This is similar to the over-prescribing of antibiotics that has come back to haunt us with drug resistant strains of bacteria. In some cases doctors no doubt simply prescribed the drugs to appease patients. In other cases profit was perhaps a motive. Fortunately, there have been serious efforts to address this matter in the medical community.

A second reason is that pharmaceutical companies did a good job selling their pain medications and encouraged doctors to prescribe them and patients to use them. While the industry had no intention of killing its customers, the pushing of pain medication has had that effect.

Of course, the doctors and pharmaceutical companies do not bear the main blame. While the companies supplied the product and the doctors provided the prescriptions, the patients had to want the drugs and use the drugs in order for this problem to reach the level of an epidemic.

The main causal factor would seem to be that the American attitude towards pain changed and resulted in the above mentioned 600% increase in the consumption of pain killers. In the past, Americans seemed more willing to tolerate pain and less willing to use heavy duty pain medications to treat relatively minor pains. These attitudes changed and now Americans are generally less willing to tolerate pain and more willing to turn to prescription pain killers. I regard this as a moral failing on the part of Americans.

As an athlete, I am no stranger to pain. I have suffered the usual assortment of injuries that go along with being a competitive runner and a martial artist. I also received some advanced education in pain when a fall tore my quadriceps tendon. As might be imagined, I have received numerous prescriptions for pain medication. However, I have used pain medications incredibly sparingly and if I do get a prescription filled, I usually end up properly disposing of the vast majority of the medication. I do admit that I did make use of pain medication when recovering from my tendon tear—the surgery involved a seven inch incision in my leg that cut down until the tendon was exposed. The doctor had to retrieve the tendon, drill holes through my knee cap to re-attach the tendon and then close the incision. As might be imagined, this was a source of considerable pain. However, I only used the pain medicine when I needed to sleep at night—I found that the pain tended to keep me awake at first. Some people did ask me if I had any problem resisting the lure of the pain medication (and a few people, jokingly I hope, asked for my extras). I had no trouble at all. Naturally, given that so many people are abusing pain medication, I did wonder about the differences between myself and my fellows who are hooked on pain medication—sometimes to the point of death.

A key part of the explanation is my system of values. When I was a kid, I was rather weak in regards to pain. I infer this is true of most people. However, my father and others endeavored to teach me that a boy should be tough in the face of pain. When I started running, I learned a lot about pain (I first started running in basketball shoes and got huge, bleeding blisters). My main lesson was that an athlete did not let pain defeat him and certainly did not let down the team just because something hurt. When I started martial arts, I learned a lot more about pain and how to endure it. This training instilled me with the belief that one should endure pain and that to give in to it would be dishonorable and wrong. This also includes the idea that the use of painkillers is undesirable. This was balanced by the accompanying belief, namely that a person should not needlessly injure his body. As might be suspected, I learned to distinguish between mere pain and actual damage occurring to my body.

Of course, the above just explains why I believe what I do—it does not serve to provide a moral argument for enduring pain and resisting the abuse of pain medication. What is wanted are reasons to think that my view is morally commendable and that the alternative is to be condemned. Not surprisingly, I will turn to Aristotle here.

Following Aristotle, one becomes better able to endure pain by habituation. In my case, running and martial arts built my tolerance for pain, allowing me to handle the pain ever more effectively, both mentally and physically. Because of this, when I fell from my roof and tore my quadriceps tendon, I was able to drive myself to the doctor—I had one working leg, which is all I needed. This ability to endure pain also serves me well in lesser situations, such as racing, enduring committee meetings and grading papers.

This, of course, provides a practical reason to learn to endure pain—a person is much more capable of facing problems involving pain when she is properly trained in the matter. Someone who lacks this training and ability will be at a disadvantage when facing situations involving pain and this could prove harmful or even fatal. Naturally, a person who relies on pain medication to deal with pain will not be training herself to endure. Rather, she will be training herself to give in to pain and become dependent on medication that will become increasingly ineffective. In fact, some people end up becoming even more sensitive to pain because of their pain medication.

From a moral standpoint, a person who does not learn to endure pain properly and instead turns unnecessarily to pain medication is doing harm to himself and this can even lead to an untimely death. Naturally, as Aristotle would argue, there is also an excess when it comes to dealing with pain: a person who forces herself to endure pain beyond her limits, or when doing so causes actual damage, is not acting wisely or virtuously, but self-destructively. This can be used in a utilitarian argument to establish the wrongness of relying on pain medication unnecessarily as well as the wrongness of enduring pain stupidly. Obviously, it can also be used in the context of virtue theory: a person who turns to medication too quickly is defective in terms of deficiency; one who harms herself by suffering beyond the point of reason is defective in terms of excess.

Currently, Americans are, in general, suffering from a moral deficiency in regards to the matter of pain tolerance and it is killing us at an alarming rate. As might be suspected, there have been attempts to address the matter through laws and regulations regarding pain medication prescriptions. This supplies people with a will surrogate—if a person cannot get pain medication, then she will have to endure the pain. Of course, people are rather adept at getting drugs illegally and hence such laws and regulations are of limited effectiveness.

What is also needed is a change in values. As noted above, Americans are generally less willing to tolerate even minor pains and are generally willing to turn towards powerful pain medication. Since this was not always the case, it seems clear that this could be changed via proper training and values. What people need is, as discussed in an earlier essay, training of the will to endure pain that should be endured and resist the easy fix of medication.

In closing, I am obligated to add that there are cases in which the use of pain medication is legitimate. After all, the body and will are not limitless in their capacities and there are times when pain should be killed rather than endured. Obvious cases include severe injuries and illnesses. The challenge then, is sorting out what pain should be endured and what should not. Since I am a crazy runner, I tend to err on the side of enduring pain—sometimes foolishly so. As such, I would probably not be the best person to address this matter.

Training the Will

In general, will is a very useful thing to have. After all, it allows a person to overcome factors that would make his decisions for him, such as pain, fear, anger, fatigue, lust or weakness. I would, of course, be remiss to not mention that the will can be used to overcome generally positive factors such as compassion, love and mercy as well. The will, as Kant noted, can apparently select good or evil with equal resolve. However, I will set aside the concern regarding the bad will and focus on training the will.

Based on my own experience, the will is rather like stamina—while people vary in what they get by nature, it can be improved by proper training. This, of course, nicely matches Aristotle’s view of the virtues.

While there are no doubt many self-help books discussing how to train the will with various elaborate and strange methods, the process is actually very straightforward and is like training any attribute. To be specific, it is mainly a matter of exercising the capacity but not doing so to excess (and thus burning out) or deficiency (and thus getting no gain). To borrow from Aristotle, one way of developing the will in regards to temperance is to practice refraining from pleasures to the proper degree (the mean) and this will help train the will. As another example, one can build will via athletic activities by continuing when pain and fatigue are pushing one to stop. Naturally, one should not do this to excess (because of the possibility of injury) nor be deficient in it (because there will be no gain).

As far as simple and easy ways to train the will, meditation and repetitive mental exercises (such as repeating prayers or simply repeated counting) seem to help in developing this attribute.

One advantage of the indirect training of the will, such as with running, is that it also tends to develop other resources that can be used in place of the will. To use a concrete example, when a person tries to get into shape to run, sticking with the running will initially take a lot of will because the pain and fatigue will begin quickly. However, as the person gets into shape it will take longer for them to start to hurt and feel fatigued. As such, the person will not need to use as much will when running (and if the person becomes a crazy runner like me, then she will need to use a lot of will to take a rest day from running). To borrow a bit from Aristotle, once a person becomes properly habituated to an activity, then the will cost of that activity becomes much less—thus making it easier to engage in that activity.  For example, a person who initially has to struggle to eat healthy food rather than junk food will find that resisting not only builds their will but also makes it easier to resist the temptations of junk.

Another interesting point of consideration is what could be called will surrogates. A will surrogate functions much like the will by allowing a person to resist factors that would otherwise “take control” of the person. However, what makes the will surrogate a surrogate is that it is something that is not actually the will—it merely serves a similar function. Having these would seem to “build the will” by providing a surrogate that can be called upon when the person’s own will is failing—sort of a mental tag team situation.

For example, a religious person could use his belief in God as a will surrogate to resist temptations forbidden by his faith, such as adultery. That is, he is able to do what he wills rather than what his lust is pushing him to do. As another example, a person might use pride or honor as will surrogates—she, for example, might push through the pain and fatigue of a 10K race because of her pride. Other emotions (such as love) and factors could also serve as will surrogates by enabling a person to do what he wills rather than what he is being pushed to do.

One obvious point of concern regarding will surrogates is that they could be seen not as allowing the person to do as he would will when he lacks his own will resources but as merely being other factors that “make the decision” for the person. For example, if a person resists having an affair with a coworker because of his religious beliefs, then it could be contended that he has not chosen to not have the affair. Rather, his religious belief (and perhaps fear of God) was stronger than his lust. If so, those who gain what appears to be willpower from such sources are not really gaining will. Rather they merely have other factors that make them do or not do things in a way that resembles the actions of the will.

Will

As a runner, martial artist and philosopher I have considerable interest in the matter of the will. As might be imagined, my view of the will is shaped mostly by my training and competitions. Naturally enough, I see the will from my own perspective and in my own mind. As such, much as Hume noted in his discussion of personal identity, I am obligated to note that other people might find that their experiences vary considerably. That is, other people might see their will as very different or they might even not believe that they have a will at all.

As a gamer, I also have the odd habit of modeling reality in terms of game rules and statistics—I am approaching the will in the same manner. This is, of course, similar to modeling reality in other ways, such as using mathematical models.

In my experience, my will functions as a mental resource that allows me to remain in control of my actions. To be a bit more specific, the use of the will allows me to prevent other factors from forcing me to act or not act in certain ways. In game terms, I see the will as being like “hit points” that get used up in the battle against these other factors. As with hit points, running out of “will points” results in defeat. Since this is rather abstract, I will illustrate this with two examples.

This morning (as I write this) I did my usual Tuesday work out: two hours of martial arts followed by about two hours of running. Part of my running workout  was doing hill repeats in the park—this involves running up and down the hill over and over (rather like marching up and down the square). Not surprisingly, this becomes increasingly painful and fatiguing. As such, the pain and fatigue were “trying” to stop me. I wanted to keep running up and down the hill and doing this required expending those will points. This is because without my will the pain and fatigue would stop me well before I am actually physically incapable of running anymore. Roughly put, as long as I have will points to expend I could keep running until I collapse from exhaustion. At that point no amount of will can move the muscles and my capacity to exercise my will in this matter would also be exhausted. Naturally, I know that training to the point of exhaustion would do more harm than good, so I will myself to stop running even though I desire to keep going. I also know from experience that my will can run out while racing or training—that is, I give in to fatigue or pain before my body is actually at the point of physically failing.  These occurrences are failures of will and nicely illustrate that the will can run out or be overcome.

After my run, I had my breakfast and faced the temptation of two boxes of assorted chocolates. Like all humans, I really like sugar and hence there was a conflict between my hunger for chocolate and my choice to not shove lots of extra calories and junk into my pie port. My hunger, of course, “wants” to control me. But, of course, if I yield to the hunger for chocolate then I am not in control—the desire is directing me against my will. Of course, the hunger is not going to simply “give up” and it must be controlled by expending will and doing this keeps me in control of my actions by making them my choice.

Naturally, many alternatives to the will can be presented. For example, Hobbes’ account of deliberation is that competing desires (or aversions) “battle it out”, but the stronger always wins and thus there is no matter of will or choice. However, I rather like my view more and it seems to match my intuitions and experiences.

Philosopher’s Carnival No. 146

Hello new friends, philosophers, and likeminded internet creatures. This month TPM is hosting the Philosopher’s Carnival.

Something feels wrong with the state of philosophy today. From whence hath this sense of ill-boding come?

For this month’s Carnival, we shall survey a selection of recent posts that are loosely arranged around the theme of existential threats to contemporary philosophy. I focus on four. Pre-theoretic intuitions seem a little less credible as sources of evidence. Talk about possible worlds seems just a bit less scientific. The very idea of rationality looks as though it is being taken over by cognate disciplines, like cognitive science and psychology. And some of the most talented philosophers of the last generation have taken up arms against a scientific theory that enjoys a strong consensus. Some of these threats are disturbing, while others are eminently solvable. All of them deserve wider attention.

1. Philosophical intuitions

Over at Psychology Today, Paul Thagard argued that armchair philosophy is dogmatic. He lists eleven unwritten rules that he believes are a part of the culture of analytic philosophy. Accompanying each of these dogmas he proposes a remedy, ostensibly from the point of view of the sciences. [Full disclosure: Paul and I know each other well, and often work together.]

Paul’s list is successful in capturing some of the worries that are sometimes expressed about contemporary analytic philosophy. It acts as a bellwether, a succinct statement of defiance. Unfortunately, I do not believe that most of the items on the list hit their target. But I do think that two points in particular cut close to the bone:

3. [Analytic philosophers believe that] People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don’t trust your intuitions.

4. [Analytic philosophers believe that] Thought experiments are a good way of generating intuitive evidence. Natural alternative: use thought experiments only as a way of generating hypotheses, and evaluate hypotheses objectively by considering evidence derived from systematic observations and controlled experiments.

From what I understand, Paul is not arguing against the classics in analytic philosophy. (e.g., Carnap was not an intuition-monger.) He’s also obviously not arguing against the influential strain of analytic philosophers that are descendants of Quine — indeed, he is one of those philosophers. Rather, I think Paul is worried that contemporary analytic philosophers have gotten a bit too comfortable in trusting their pre-theoretic intuitions when they are prompted to respond to cases for the purpose of delineating concepts.

As Catarina Dutilh Novaes points out, some recent commentators have argued that no prominent philosophers have ever treated pre-theoretic intuitions as a source of evidence. If that’s true, then it would turn out that Paul is entirely off base about the role of intuition in philosophy.

Unfortunately, there is persuasive evidence that some influential philosophers have treated some pre-theoretic intuitions as being a source of evidence about the structure of concepts. For example, Saul Kripke (in Naming and Necessity, 1972, p. 42) explained that intuitiveness is the reason why there is a distinction between necessity and contingency in the first place: “Some philosophers think that something’s having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of it, myself. I really don’t know, in a way, what more conclusive evidence one can have about anything, ultimately speaking”.

2. Philosophical necessity

Let’s consider another item from Paul’s list of dogmas:

8. There are necessary truths that apply to all possible worlds. Natural alternative: recognize that it is hard enough to figure out what is true in this world, and there is no reliable way of establishing what is true in all possible worlds, so abandon the concept of necessity.

In this passage Paul makes a radical claim. He argues that we should do away with the very idea of necessity. What might he be worried about?

To make a claim about the necessity of something is to make a claim about its truth across all possible worlds. Granted, our talk about possible worlds sounds kind of spooky, but [arguably] it is really just a pragmatic intellectual device, a harmless way of speaking. If you like, you could replace the idea of a ‘possible world’ with a ‘state-space’. When computer scientists at Waterloo learn modal logic, they replace one idiom with another — seemingly without incident.
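To make the “state-space” gloss concrete, here is a minimal sketch of my own (not drawn from Paul’s post): modal talk evaluated over a finite set of labelled states and an accessibility relation, with nothing metaphysically spooky in sight.

```python
# Possible worlds as a plain state-space: worlds are labels, accessibility is a
# dict of sets, and the atomic facts at each world are just sets of strings.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
facts = {"w1": {"p"}, "w2": {"p", "q"}, "w3": {"q"}}

def holds(world, atom):
    return atom in facts[world]

def necessarily(world, atom):
    # Box: the atom holds at every state accessible from `world`
    # (vacuously true if no state is accessible).
    return all(holds(w, atom) for w in access[world])

def possibly(world, atom):
    # Diamond: the atom holds at some accessible state.
    return any(holds(w, atom) for w in access[world])

print(necessarily("w1", "q"))  # True: q holds at both accessible states, w2 and w3
print(necessarily("w1", "p"))  # False: p fails at w3
print(possibly("w3", "p"))     # False: nothing is accessible from w3
```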

If possible worlds semantics is just a way of speaking, then it would not be objectionable. Indeed, the language of possible worlds seems to be cooked into the way we reason about things. Consider counterfactual claims, like “If Oswald hadn’t shot Kennedy, nobody else would’ve.” These claims are easy to make and come naturally to us. You don’t need a degree in philosophy to talk about how things could have been, you just need some knowledge of a language and an active imagination.

But when you slow down and take a closer look at what has been said there, you will see that the counterfactual claim involves discussion of a possible (imaginary) world where Kennedy had not been shot. We seem to be talking about what that possible world looks like. Does that mean that this other possible world is real — that we’re making reference to this other universe, in roughly the same way we might refer to the sun or the sky? Well, if so, then that sounds like it would be a turn toward spooky metaphysics.

Hence, some philosophers seem to have gone a bit too far in their enthusiasm for the metaphysics of possible worlds. As Ross Cameron reminds us, David K. Lewis argued that possible worlds are real:

For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world with round squares is a world with round squares as parts.  And so, to believe in the latter world is to believe in round squares.  And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which could not exist.  In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, or etc.

And to make matters worse, some people even argue that impossible worlds are real, ostensibly for similar reasons. Some people…

…like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to include impossible worlds.

Much like the White Queen, proponents of this view want to believe impossible things before breakfast. The only difference is that they evidently want to keep at it all day long.

Cameron argues that there is a difference between different kinds of impossibility, and that at least one form of impossibility cannot be part of our ontology. If you’re feeling dangerous, you can posit impossible concrete things, e.g., round squares. But you cannot say that there are worlds where “2+2=5” and still call yourself a friend of Lewis:

For Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4.

While Cameron presents us with a cogent rebuttal to the impossibilist, his objection still leaves open the possibility that there are impossible worlds — at least, so long as the impossible worlds involve exotic concrete entities like the square circle and not incoherent abstracta.

So what we need is a scientifically credible account of necessity and possibility. In a whirlwind of a post over at LessWrong, Eliezer Yudkowsky argues that when we reason using counterfactuals, we are making a mixed reference which involves reference to both logical laws and the actual world.

[I]n one sense, “If Oswald hadn’t shot Kennedy, nobody else would’ve” is a fact; it’s a mixed reference that starts with the causal model of the actual universe where [Oswald was a lone agent], and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like ‘six’ for the product of apples on the table, is not actually present anywhere in the universe.

Yudkowsky argues that this is part of what he calls the ‘great reductionist project’ in scientific explanation. For Yudkowsky, counterfactual reasoning is quite important to the project and prospects of a certain form of science. Moreover, claims about counterfactuals can even be true. But unlike Lewis, Yudkowsky doesn’t need to argue that counterfactuals (or counterpossibles) are really real. This puts Yudkowsky on some pretty strong footing. If he is right, then it is hardly any problem for science (cognitive or otherwise) if we make use of a semantics of possible worlds.
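As a toy illustration of that “counterfactual surgery” (my own construction, not Yudkowsky’s code): take a little causal model of the actual world, sever one variable from its usual causes, pin it to the counterfactual value, and recompute what follows downstream.

```python
# A toy causal model of the Oswald/Kennedy counterfactual. "Surgery" means
# overriding one variable by hand while leaving the rest of the model intact.

def kennedy_model(oswald_shoots, backup_shooter_exists):
    someone_else_shoots = backup_shooter_exists and not oswald_shoots
    kennedy_shot = oswald_shoots or someone_else_shoots
    return kennedy_shot

# Causal model of the actual universe: Oswald acted alone.
actual = kennedy_model(oswald_shoots=True, backup_shooter_exists=False)

# Surgery: hold the rest of the model fixed, but set oswald_shoots=False.
counterfactual = kennedy_model(oswald_shoots=False, backup_shooter_exists=False)

print(actual)          # True: Kennedy was shot
print(counterfactual)  # False: "If Oswald hadn't shot Kennedy, nobody else would've."
```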

Notice, for Yudkowsky’s project to work, there has to be such a thing as a distinction between abstracta and concreta in the first place, such that both are the sorts of things we’re able to refer to. But what, exactly, does the distinction between abstract and concrete mean? Is it perhaps just another way of upsetting Quine by talking about the analytic and the synthetic?

In a two-part analysis of reference [here, then here], Tristan Haze at Sprachlogik suggests that we can understand referring activity as contact between nodes belonging to distinct language-systems. In his vernacular, reference to abstract propositions involves the direct comparison of two language-systems, while reference to concrete propositions involves the coordination of systems in terms of a particular object. But I worry that unless we learn more about the causal and representational underpinnings of a ‘language-system’, there is no principled reason that stops us from inferring that his theory of reference is actually just a comparison of languages. And if so, then it would be well-trod territory.

3. Philosophical rationality

But let’s get back to Paul’s list. Paul seems to think that philosophy has drifted too far away from contemporary cognitive science. He worries that philosophical expertise is potentially cramped by cognitive biases.

Similarly, at LessWrong, Lukeprog worries that philosophers are not taking psychology very seriously.

Because it tackles so many questions that can’t be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn’t: we generally are as “stupid and self-deceiving” as science assumes we are. We’re “predictably irrational” and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one’s rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn’t seem so. I don’t see much Kahneman & Tversky in philosophy syllabi — just light-weight “critical thinking” classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don’t like. So what’s really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.

In some sense or other, Luke is surely correct. Philosophers really should be paying close attention to the antecedents of (ir)rationality, and really should be training their students to do exactly that. Awareness of cognitive illusions must be a part of the philosopher’s toolkit.

But does that mean that cognitive science should be a part of the epistemologist’s domain of research? The answer looks controversial. Prompted by a post by Leah Lebresco, Eli Horowitz at Rust Belt Philosophy argues that we need to take care not to conflate cognitive biases with fallacies. Instead, Horowitz argues that we ought to make a careful distinction between cognitive psychology and epistemology. In a discussion of a cognitive bias that Lebresco calls the ‘ugh field’, Horowitz writes:

On its face, this sort of thing looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature… it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself.

In principle, Eli’s point is sound. There is, after all, at least a superficial difference between dispositions to (in)correctness, and actual facts about (in)correctness. But even if you think he is making an important distinction, Leah seems to be making a useful practical point about how philosophers can benefit from a change in pedagogy. Knowledge of cognitive biases really should be a part of the introductory curriculum. Development of the proper reasoning environment is, for all practical purposes, of major methodological interest to those who teach how to reason effectively. So it seems that in order to do better philosophy, philosophers must be prepared to do some psychology.

4. Philosophical anti-Darwinism

The eminent philosopher Thomas Nagel recently published a critique of Darwinian accounts of evolution through natural selection. In this effort, Nagel joins Jerry Fodor and Alvin Plantinga, who have also published philosophical worries about Darwinism. The works in this subgenre have by and large been thought to be lacking in empirical and scholarly rigor. This trend has caused a great disturbance in the profession, as philosophical epistemologists and philosophers of science are especially sensitive to the ridicule they face from scientists who write in the popular press.

Enter Mohan Matthen. Writing at NewAPPS, Mohan worries that some of the leading lights of the profession are not living up to expectations.

Why exactly are Alvin Plantinga and Tom Nagel reviewing each other? And could we have expected a more dismal intellectual result than Plantinga on Nagel’s Mind and Cosmos in the New Republic? When two self-perceived victims get together, you get a chorus of hurt: For recommending an Intelligent Design manifesto as Book of the Year, Plantinga moans, “Nagel paid the predictable price; he was said to be arrogant, dangerous to children, a disgrace, hypocritical, ignorant, mind-polluting, reprehensible, stupid, unscientific, and in general a less than wholly upstanding citizen of the republic of letters.”

My heart goes out to anybody who utters such a wail, knowing that he is himself held in precisely the same low esteem. My mind, however, remains steely and cold.

Plantinga writes, “Nagel supports the commonsense view that the probability of [life evolving by natural selection] in the time available is extremely low.” And this, he says, is “right on target.” This is an extremely substantive scientific claim—and given Plantinga’s mention of “genetic mutation”, “time available,” etc., it would seem that he recognizes this. So you might hope that he and Nagel had examined the scientific evidence in some detail, for nothing else would justify their assertions on this point. Sadly, neither produces anything resembling an argument for their venturesome conclusion, nor even any substantial citation of the scientific evidence. They seem to think that the estimation of such probabilities is well within the domain of a priori philosophical thought. (Just to be clear: it isn’t.)

Coda

Pre-theoretic intuitions are here to stay, so we have to moderate how we think about their evidential role. The metaphysics of modality cannot be dismissed out of hand — we need necessity. But we also need the idea of necessity to be tempered by our best scientific practices.

The year is at its nadir. November was purgatory, as all Novembers are. But now December has arrived, and the nights have crowded out the days. And an accompanying darkness has descended upon philosophy. Though the wind howls and the winter continues unabated, we can find comfort in patience. Spring cannot be far off.

Issue No.147 of the Philosopher’s Carnival will be hosted by Philosophy & Polity. See you next year.

Over A Cliff

I’ve been doing some thinking – not a lot, obviously, because one doesn’t want to overdo that sort of thing – about the nature of informed consent. I’m curious about what people think about the following scenario, which is designed to illuminate one aspect of the phenomenon.

You’re on a cliff, and in front of you is a narrow path, to the right of which there is a sheer drop down to the sea. You’re about to choose whether to traverse this path or instead turn back and head for home, when a syringe drops from the sky and injects you with a drug that has the following effect.

You remain aware of all the reasons why the narrow path spells danger. You are also aware that normally you would be very reluctant to traverse the path. However, as a result of the drug, these things no longer have any significant motivational force – they have lost the capacity to bind your behaviour. Put simply, you know that you would be taking a risk by not turning back, but you don’t care – it doesn’t feel as if it is a big deal (although, if asked, you could explain why it was a big deal and would report that previously you would have felt it to be a big deal – but you wouldn’t care about any of these things either).

The question is whether, under these circumstances, any choice you make is a fully informed choice. Or, to put the question a slightly different way: if I told you that you had to make the choice under these circumstances, would you feel that you were being deprived of something central to the decision-making process?

My tentative view is that it would not be a fully informed choice, even though you still have access to all the relevant information.

As I say, I’d be very curious to know what other people think about this…