Category Archives: Psychology

The Teenage Mind & Decision Making


One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.

Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.

Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they assess the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults do. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult does would be acting rationally (to a degree) if she were willing to take more risk to achieve that reward.
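One simple way to make this weighing explicit is a toy expected-value model (purely illustrative, and not drawn from the research mentioned above): treat the risky act as a gamble with some chance of gaining the reward and some chance of suffering the harm. The odds needed to make the gamble worth taking fall as the subjective value of the reward rises, so someone who feels the reward more strongly can rationally accept worse odds.

# A toy expected-value model of risk taking (illustrative only; all numbers are made up).
def worth_taking(p_success, reward_value, harm_cost):
    """Is the expected value of the risky act positive?"""
    return p_success * reward_value > (1 - p_success) * harm_cost

def threshold_probability(reward_value, harm_cost):
    """Smallest chance of success at which the act breaks even."""
    return harm_cost / (reward_value + harm_cost)

# Same perceived risk and harm, different subjective value placed on the reward:
print(worth_taking(p_success=0.3, reward_value=10, harm_cost=10))   # False -- not worth it at this valuation
print(worth_taking(p_success=0.3, reward_value=30, harm_cost=10))   # True  -- same odds, bigger felt reward
print(threshold_probability(reward_value=10, harm_cost=10))         # 0.5  -- needs even odds
print(threshold_probability(reward_value=30, harm_cost=10))         # 0.25 -- worse odds still look rational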

Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.

It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based in the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.

Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, that sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.

Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.

 


Maleficent & Rape: Rape Culture


Maleficent’s dragon form as it appears in the climax of the film. (Photo credit: Wikipedia)

In my previous essay I focused on the matter of metaphors in the context of Hayley Krischer’s claim that the movie Maleficent includes a rape scene. In this essay I will take on a rather more controversial matter, namely the question of why it might matter as to whether the movie contains the alleged rape scene or not. This might result in some hostile responses.

It might be wondered what taking the scene as a metaphorical (or implied) rape adds to the work. One might say, “Maleficent is betrayed and mutilated—what does the idea that this is a rape metaphor add? Does not the betrayal and mutilation suffice to serve the purpose of the narrative, or does it need to be believed that this is a metaphorical rape?”

One way to answer the question would be to focus on aesthetic matters: does accepting the rape metaphor enhance the aesthetic value of the work? That is, is it a better film on that interpretation? If the answer is “yes”, then that provides an aesthetic reason to accept that interpretation. However, if this does not improve the aesthetic value of the film, then it would not provide a compelling reason for that interpretation over the alternative.

Another way to answer the question is to look at it in terms of academic value. That is, taking it as a metaphor for rape provides an insight into an important truth—the most likely truth being the existence of a pervasive rape culture.

However, there are risks in embracing a view on academic grounds. One common risk is that theorists often accept a beloved theory as an intellectual version of the ring of power: the one theory to explain it all. It could be objected that taking what happens in Maleficent to be rape (rather than something horrible but not rape) expands the definition of “rape” to encompass ever more and thus validates the rape-culture theory by redefinition.

However, there appears to be an abundance of evil that does not seem to be driven by the motive to rape—unless all evil is the result of some sort of Freudian sublimation. This is, of course, not impossible and might even be true. But, being too enamored of a theory can easily blind one—wearing the goggles of matriarchy can blind one as effectively as the goggles of the patriarchy (which allow people to use phrases like “legitimate rape” and really mean it).

Another way to look at the matter is in terms of ideological value. In this case, taking what happens as a metaphor for rape provides support for an ideology—most likely that regarding an ideology that includes a belief in a pervasive rape culture. By expanding the definition of “rape”, rape expands within the culture—thus making the case that there is a pervasive rape culture. However, there is the legitimate concern as to whether or not such expanded definitions are accurate.

People seek evidence for their ideology (or deny evidence against it) and can do so in ways that are not consistent with critical thinking—a subject I examined in some detail in another essay. The risk, as always, is that people accept something as true because they believe it is true, rather than believing it because it has been shown to be true.

It might be contended that taking an academic or ideological interpretation of Maleficent is harmless and that debating its accuracy is pointless. However, I contend that overuse of the notion of rape culture is problematic. To show this, I will turn to the murders allegedly committed by Elliot Rodger.

In response to Rodger’s alleged murder of three men and two women, Salon editor Joan Walsh asserted that “the widespread recognition that Elliot Rodger’s killing spree was the tragic result of misogyny and male entitlement has been a little bit surprising, and encouraging.” Even self-proclaimed nerds have bought into this notion, apparently not realizing the significance of the fact that three of the victims were men—rather odd targets for someone driven by misogyny and male entitlement.

While in many cases the motives of alleged killers are not known, Rodger wrote a lengthy manifesto that allows an in-depth look at his professed motives.

Fellow philosopher Jean Kazez has analyzed the text of Elliot Rodger’s manifesto and presents the view that while Rodger eventually adopted misogynistic views, these came late in the development of his hatred. Her view is supported by text taken from the manifesto, and it seems clear that the views characterized as misogynistic are the terrible fruit of his earlier hatreds.

Kazez notes: “But if you read this manifesto, what seems much more overwhelming is the overall pattern of hate, envy, loneliness, resentment, sadness, hopelessness, craving for status, humiliation, despair, etc. So it is baffling to me that we’ve settled on misogyny as key to understanding why this happened.”

While I share her bafflement, I can suggest three possible explanations. The first, and easiest, is that the modern news media generally prefers a simple narrative and it tends to get easily caught up in social media trends. The idea that Rodger (allegedly) killed because he is a misogynist is a simple narrative and one that started to trend on social media like Twitter.

The second is that there is an academic commitment in some circles to the rape-culture theory, which includes as essential components views about misogyny and male entitlement. Given a pre-existing commitment to this theory and the confirmation bias that all people are subject to, it is no surprise that there would be a focus on this one small part of his manifesto.

The third is that there is also a commitment in some circles to the rape-culture ideology (which is distinct from the academic theory). As with the theory, people who accept this ideology are subject to the confirmation bias. In addition, there are the usual perils of ideology and belief. As such, it is certainly to be expected that there would be considerable focus on those small parts of his manifesto.

Serving to reinforce the theory and the ideology is the fact that a critical assessment of either can be met with considerable hostility. Some might also suspect that certain men publicly support the ideology or theory due to a desire to appear to be appropriately sensitive men.

As a final point, it might be wondered why being critical of such theory and ideology matters. The easy and obvious answer is that the danger of excessively focusing on the rape culture idea is that doing so can easily lead to ignoring all the other causal factors that contribute to evil actions. To use the obvious analogy, if it is assumed that a factor is a cause of a broad range of diseases when it is not, then trying to prevent those diseases by focusing on that factor will fail. In regards to the specific matter, addressing the rape culture will not fix the ills that it does not cause. This is not to say that rape culture is not worth addressing—there are horrific and vile aspects to our culture that directly contribute to rape and these should be addressed with the intent to eliminate them.

There is, of course, also the matter of truth: getting things right matters. As such, I freely admit I could be wrong about all this and I welcome, as always, criticism.

 


Defining Our Gods

The philosopher Alvin Plantinga was interviewed for The Stone this weekend, making the claim that Atheism is Irrational. His conclusion, however, seems to allow that agnosticism is pretty reasonable, and his reasoning rests mostly on the absurdity of the universe and the hope that some kind of God will provide an explanation for whatever we cannot make sense of. These attitudes seem to me to require that we clarify a few things.

There are a variety of different intended meanings behind the word “atheist” as well as the word “God”. I generally make the point that I am atheistic when it comes to personal or specific gods like Zeus, Jehovah, Jesus, Odin, Allah, and so on, but agnostic if we’re talking about deism, that is, when it comes to an unnamed, unknowable, impersonal, original or universal intelligence or source of some kind. If this second force or being were to be referred to as “god” or even spoken of through more specific stories in an attempt to poetically understand some greater meaning, I would have no trouble calling myself agnostic as Plantinga suggests. But if the stories or expectations for afterlife or instructions for communications are meant to be considered as concrete as everyday reality, then I simply think they are as unlikely as Bigfoot or a faked moon landing – in other words, I am atheistic.

There are atheists who like to point out that atheism is ultimately a lack of belief, and therefore as long as you don’t have belief, you are atheistic – basically, those who have traditionally been called agnostics are just as much atheists. The purpose of this seems to be to expand the group of people who will identify more strongly as non-believers, and to avoid nuance – or what might be seen as hesitation – in self-description.

However, this allows for confusion and unnecessary disagreement at times. I think in fact that there are a fair number of people who are atheistic when it comes to very literal gods, like the one Ken Ham was espousing in his debate with Bill Nye. Some people believe, as Ken Ham does, that without a literal creation, the whole idea of God doesn’t make sense, and so believe in creationism because they believe in God. Some share this starting point, but are convinced by science and conclude there is no god. But others reject the premise and don’t connect their religious positions with their understandings of science. It’s a popular jab among atheists that “everyone is atheistic when it comes to someone else’s gods”, but it’s also a useful description of reality. We do all choose to not believe certain things, even if we would not claim absolute certainty.

Plenty of us would concede that only math or closed systems can be certain, so it’s technically possible that any conspiracy theory or mythology at issue is actually true – but still in general it can be considered reasonable not to believe conspiracy theories or mythologies. And if one includes mainstream religious mythologies with the smaller, less popular, less currently practiced ones, being atheistic about Jesus (as a literal, supernatural persona) is not that surprising from standard philosophical perspectives. The key here is that the stories are being looked at from a materialistic point of view – as Hegel pointed out, once spirituality is asked to compete in an empirical domain, it has no chance. It came about to provide insight, meaning, love and hope – not facts, proof, and evidence.

The more deeply debatable issue would be a broadly construed and non-specific deistic entity responsible for life, intelligence or being. An argument can be made that a force of this kind provides a kind of unity to existence that helps to make sense of it. It does seem rather absurd that the universe simply happened, although I am somewhat inclined to the notion that the universe is just absurd. On the other hand, perhaps there is a greater order that is not always evident. I would happily use the word agnostic to describe my opinion about this, and the philosophical discussion regarding whether there is an originating source or natural intelligence to being seems a useful one. However, it should not be considered to be relevant to one’s opinion about supernatural personas who talk to earthlings and interfere in their lives.

There are people who identify as believers who really could be categorized as atheistic in the same way I am about the literal versions of their gods. They understand the stories of their religions as pathways to a closer understanding of a great unspecified deity, but take them no more literally than Platonists take the story of the Cave, which is to say, the stories are meant to be meaningful and the concrete fact-based aspect is basically irrelevant. It’s not a question of history or science: it’s metaphysics. Let’s not pretend any of us know the answer to this one.

Losing your illusions

Analytic philosophy has been enormously influential in part because it has been an enormous philosophical success. Consider the following example. Suppose it were argued that God must exist, because we can meaningfully refer to Him, and reference can only work so long as a person refers to something real. Once upon a time, something like that argument struck people as a pretty powerful argument. But today, the analytic philosopher may answer: “We have been misled by our language. When we speak of God, we are merely asserting that some thing fits a certain description, and not actually referring to anything.” That is the upshot of Russell’s theory of descriptions, and it did its part in helping to disarm a potent metaphysical illusion.

Sometimes progress in philosophy occurs in something like this way. Questions are not decisively answered, once and for all — instead, sometimes an answer is proposed which is sufficiently motivating that good-faith informed parties stop asking the original question. Consider, for instance, the old paradox, “If a tree falls in the forest, and no-one is around, does it make a sound?” If you make a distinction between primary and secondary qualities, then the answer is plainly “No”: for while sounds are observer-dependent facts, the vibration of molecules would happen whether or not anyone was present. If you rephrase the question in terms of the primary qualities (“If a tree falls in the forest, and no-one is around, do air molecules vibrate?”), then the answer is an obvious “Yes”. A new distinction has helped us to resolve an old problem. It is a dead (falsidical) paradox: something that seems internally inconsistent, but which just turns into a flat-out absurdity when put under close scrutiny.

Interesting as those examples are, it is also possible that linguistic analysis can help us resolve perceptual illusions. Consider the image below (the Muller-Lyer illusion, taken from the Institut Nicod’s great Cognition and Culture lab). Now answer: “Which line is longer?”


Fig. 1. Which line is longer?

Most participants will agree that the top line appears longer than the bottom one, despite the fact that they are ostensibly the same length. It is an illusion.

Illusions are supposed to be irresolvable conflicts between how things seem to you and how they actually are. For example, a mirage is an illusion, because if you stand in one place, then no matter how you present the stimuli to yourself, it will look as though a cloudy water puddle is hovering there somewhere in the distance. The mirage will persist regardless of how you examine it or think about it. There is no linguistic-mental switch you can flip inside your brain to make the mirage go away. Analytic philosophers can’t help you with that. (Similarly, I hold out no hope that an analytic philosopher’s armchair musings will help to figure out the direction of spin for this restless ballerina.)

However, as a matter of linguistic analysis, it is not unambiguously true that the lines are the same length in the Muller-Lyer illusion. Oftentimes, the concept of a “line” is not operationally defined. Is a line just whatever sits horizontally? Or is a line whatever is distinctively horizontal (i.e., whatever is horizontal, such that it is segmented away from the arrowhead on each end)? Let’s call the former a “whole line”, and the latter a “line segment”. Of the two construals, it seems to me that it is best to interpret a line as meaning “the whole line”, because that is just the simplest reading (i.e., it doesn’t rely on arbitrary judgments about “what counts as distinctive”). At the end of the day, though, both interpretations are plausible readings of ‘line’, and we’re not told which definition we ought to be using.
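To make the two readings concrete, here is a small sketch (with made-up coordinates, not measurements of the actual Nicod image) that computes the length of a single fin-ended figure under both definitions: the line segment is just the horizontal stroke between the fins, while the whole line is the figure’s full horizontal extent, fins included.

# One Muller-Lyer figure, described by illustrative coordinates:
# a horizontal stroke plus four fin tips (outward fins extend past the stroke).
stroke = [(-50, 0), (50, 0)]
fins = [(-60, 10), (-60, -10), (60, 10), (60, -10)]

def segment_length(stroke):
    """Length under the 'line segment' reading: stroke endpoints only."""
    (x1, _), (x2, _) = stroke
    return abs(x2 - x1)

def whole_line_length(stroke, fins):
    """Length under the 'whole line' reading: full horizontal extent."""
    xs = [x for x, _ in stroke] + [x for x, _ in fins]
    return max(xs) - min(xs)

print(segment_length(stroke))            # 100
print(whole_line_length(stroke, fins))   # 120 -- the outward fins add to the extent

On a figure whose fins point inward the two numbers coincide, which is precisely why the two readings can come apart between the top and bottom lines of the illusion.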

I don’t know about you, but when I concentrate on framing the question in terms of whole lines, the perceptual illusion outright disappears. When asked, “Is one horizontal-line longer than the other?”, my eyes focus on the white space between the horizontal lines, and my mind frames the two lines as a vibrant ‘equals sign’ that happens to be bookended by some arrowheads in my peripheral vision. So the answer to the question is a clear “No”. By contrast, when asked, “Is one line-segment longer than the other?”, my eyes focus on the points at the intersection of each arrowhead, and compare them. And the answer is a modest “Yes, they seem to be different lengths” — which is consistent with the illusion as it has been commonly represented.

Now for the interesting part.

Out of curiosity, I measured both lines according to both definitions (as whole lines and as line segments). In the picture below, the innermost vertical blue guidelines map onto the ends of the line segments, while the outermost vertical blue guidelines map onto the edges of the bottom line:


Fig 2. Line segments identical, whole lines different.

Once I did this, I came to a disturbing realization: the whole lines in the picture I took from the Institut Nicod really are different lengths! As you can see, the very tips of the bottom whole line fail to align with the inner corner of the top arrow.

As a matter of fact, the bottom whole line is longer than the top whole line. This is bizarre, since the take-home message of the illusion is usually supposed to be that the lines are equal in length. But even when I was concentrating on the whole lines (looking at the white space between them, manifesting an image of the equals sign), I didn’t detect that the bottom line was longer, and probably would not have even noticed it had it not been for the fact that I had drawn vertical blue guidelines in (Fig. 2). Still, when people bring up the Muller-Lyer illusion, this is not the kind of illusion that they have in mind.

(As an aside: this is not just a problem with the image chosen from the Institut Nicod. Many iterations of the illusion face the same or similar infelicities. For example, in the bottom three arrows of this Wikipedia image, you will see that a vertical dotted guideline is drawn which compares whole lines to line segments. This can be demonstrated by looking at the blue guidelines I superimposed on the image here.)

Can the illusion be redrawn, such that it avoids the linguistic confusion? Maybe. At the moment, though, I’m not entirely sure. Here is an unsatisfying reconstruction of the Nicod image, where both line segment and whole line are of identical length for both the top arrow and the bottom one:


Fig 3. Now the two lines are truly equal (both as whole lines and as segments).

Unfortunately, when it comes to Fig. 3., I find that I’m no longer able to confidently state that one line looks longer than the other. At least at the moment, the illusion has disappeared.

Part of the problem may be that I had to thicken the arrowheads of the topmost line in order to keep them equal, both as segments and as wholes. Unfortunately, the line thickening may have muddied the illusion. Another part of the problem is that, at this point, I’ve stared at Muller-Lyer illusions for so long today that I am starting to question my own objectivity in being able to judge lines properly.

[Edit 4/30: Suppose that other people are like me, and do not detect any illusion in (Fig. 3). One might naturally wonder why that might be.

Of course, there are scientific explanations of the phenomenon that don’t rely on anything quite like analytic philosophy. (e.g., you might reasonably think that the difference is that our eyes are primed to see in three dimensions, and that since the thicker arrows appear to be closer to the eye than the thin ones, it disposes the mind to interpret the top line as visually equal to the bottom one. No linguistic analysis there.) But another possibility is that our vision of the line segment is perceptually contaminated by our vision of the whole line, owing to the gestalt properties of visual perception. This idea, or something like it, already exists in the literature in the form of assimilation theory. If so, then we observers really do profit from making an analytic distinction between whole lines and line segments in order to help diagnose the causal mechanisms responsible for this particular illusion — albeit, not to make it disappear.

Anyway. If this were a perfect post, I would conclude by saying that linguistic analysis can help us shed light on at least some perceptual illusions, and not just dismantle paradoxes. Mind you, at the moment, I don’t know if this conclusion is actually true. (It does not bode well that the assimilation theory does not seem very useful in diagnosing any other illusions.) But if it were, it would be just one more sense in which analytic philosophy can help us to cope with our illusions, if not lose them outright.]

Time for Biology, or Must We Burn Nagel?

 

NYU philosopher Thomas Nagel’s new book Mind and Cosmos has faced quite a bit of criticism from reviewers so far. And perhaps that’s simply to be expected, as the book is clearly an attempt to poke holes in a standard mechanistic view of life rather than to lay out any other fully formed vision. Its strength seems to lie in the possibility of starting up a conversation. Its weakness, unfortunately, seems to be the recycling of some unconvincing arguments that make that conversation unlikely.

The key issue that I think deserves closer inspection is the concept of teleology. Nagel reaches too far into mystical territory in his attempt to incorporate a kind of final cause, but some of his critics are too quick to reject the benefit of interpreting physics with a broader scope. While functionalists, or systemic or emergence theorists, may be more aware of the larger meaning of causality, it is still the case that many philosophers express a simplistic view of matter.

The word teleology has become associated with medieval religious beliefs, and much like the word virtue, this has overshadowed the original Aristotelian meaning. Teleology, in its classic sense, does not represent God’s intention, or call for “thinking raindrops.” Instead, it is a way to look at systems rather than billiard balls. Efficient causes are those individual balls knocking into each other, the immediate chain of events that Hume so adeptly tore apart. Final causes are the overall organization of events. The heart beats because an electrical impulse occurs in your atria, but it also beats because there is a specific set of genetic codes that sets up a circulatory system. No one imagines it is mere probability that an electrical impulse happens to occur each second.

Likewise, the rain falls because the water vapor has condensed, but it also falls because it is part of a larger weather system that has a certain amount of CO2 due to the amount of greenery in the area. It falls in order to water the grass, not in the sense that it intends to water the grass, but in the sense that it is part of a larger meteorological relationship, and it has become organized to water the grass, which will grow to produce the right atmosphere to allow it to rain, so the grass can grow, so the rain can fall. These larger systemic views are what determine teleological causes, because they provide causes within systems, or goals that each part serves. This is distinct from the simple random movement that results from probability. It is obvious in some situations that systems exist, but sometimes we can’t see the larger system, and sometimes even when we do, we can’t explain its interdependence or unified behavior from individuated perspectives. Relying on efficient causality is thinking in terms of those interactions we see directly. Final causality means figuring out what the larger relationships are.

Now, those larger relationships may build out of smaller and more direct relationships, but a final cause is the assumption of an underlying holistic system. And if this were not the case, Zeno would be right and Einstein would be wrong; Hume’s skepticism would be validated and we truly would live in randomness – or really, we wouldn’t, as nothing would sustain itself in such a world. The primary thing about a world like this is that it is static, based only on matter but not on movement, which is to say, based only on a very abstracted and unreal form of matter that does not persist through time. Instead, the classic formation requires a final system that joins the activity of the world.

What this system is or how it works is not easily answered, but it must involve the awareness that temporality and interconnectedness are not the same as mysticism or magic. To boil all science down to a series of probabilistic events misunderstands the essential philosophical interest in understanding the bigger picture, or why the relation of cause and effect is reliable. The primary options are a metaphysics like Aristotle’s that unites being, a Humean skepticism about causality, or a Kantian idealism that attributes it to human perspective.  Contemporary philosophers often run from the metaphysical picture, preferring to accept the skeptic’s outlook with a shrug (anything’s possible, but, back to what we’ve actually seen…) or work with some kind of neo-Kantian framework (nature only looks organized to us because we’re the result of it).

But attempts to think about the unified nature of being – as seen in the history of philosophy everywhere from the ancients through thinkers as diverse as Schopenhauer, Emerson, or Heidegger – should not be dismissed as incompatible with science. Too often it is a political split instead of a truly thoughtful one that leads to the rejection of holistic accounts. What I appreciate about Nagel’s attempt here is that he is honestly thinking rather than assuming that experts have worked things out. Philosophers tend to defer to scientists in contemporary discussions, which means physicists have been doing most of the metaphysics (which has hardly made it less speculative). It seems that exploring the meaning of scientific assumptions and paradigms is exactly the area we should be in.

Questioning a mechanistic abiogenesis or natural selection may be untenable in current biological journals, but philosophy’s purview is the bigger picture, and it is healthy for us to reach beyond the curtain, not feeling constrained by what’s already been accepted. While my questions are not the same as Nagel’s (and I won’t review his case here), I am glad at least to see the connection made coherently. Writers in philosophy of mind often make arguments that seem incompatible with certain scientistic assumptions but simply do not address the issue. There are options beyond ignoring the natural sciences or demanding a boiled down, mechanical, deterministic view of life. Scientific research has inched toward more dynamic or creative ideas of natural change (like emergence, complexity theory, or neuroplasticity) and theories of holism (at least in physics) so challenges should not be associated with a rejection of investigation or an embracing of mythology. We all know philosophy is meant to begin in wonder – but perhaps that’s become too much of a cliche and not enough of a mission statement.

To thine own self be

Daniel Little leads a double life as one of the world’s most prolific philosophers of social science and the author of one of the snazziest blogs on my browser start-up menu. Recently, he wrote a very interesting post on the subject of authenticity and personhood.

In that post, Little argues that the very idea of authenticity is grounded in the idea of a ‘real self’. “When we talk about authenticity, we are presupposing that a person has a real, though unobservable, inner nature, and we are asserting that he/she acts authentically when actions derive from or reflect that inner nature.” For Little, without the assumption that people have “real selves” (i.e., a set of deep characteristics that are part of a person’s inner constitution), “the idea of authenticity doesn’t have traction”. In other words: Little is saying that if we have authentic actions, then those actions must issue from our real selves.

However, Little does not think that the real self is the sole source of the person’s actions: “…it is plausible that an actor’s choices derive both from features of the self and the situation of action and the interplay of the actions of others. So script, response, and self all seem to come into the situation of action.”

So, by modus tollens, Little must not think there is any such thing as authentic actions.

But — gaaah! That can’t be right! It sure looks like there is a difference between authentic and inauthentic actions. When a homophobic evangelical turns out to be a repressed homosexual, we are right to say that their homophobia was inauthentic. When someone pretends to be an expert on something they know nothing about, they are not being authentic. When a bad actor is just playing their part, Goffman-style: not authentic.

So one of the premises has to go. For my part, I would like to take issue with Little’s assertion that the idea of authenticity “has no traction” if there is no real self. I’d like to make a strong claim: I’d like to agree that the idea of a ‘real self’ is an absurdity, a non-starter, but that all the same, there is a difference between authentic and inauthentic actions. Authenticity isn’t grounded in a ‘real (psychological) self’ — instead, it’s grounded in a core self, which is both social and psychological.


If you ever have a chance to wander into the Philosophy section at your local bookstore you’ll find no shortage of books that make claims about the Real Self. A whole subgenre of the philosophy of the ‘true self’ is influenced by the psychodynamic tradition in psychology, tracing back to the psychoanalyst D.W. Winnicott.

For the Freudians, the psyche is structured by the libido (id), which generates the self-centred ego and the sociable superego. When reading some of the works that were inspired by this tradition, I occasionally get the impression that the ‘real self’ is supposed to be a secret inner beast that lies within you, waiting to surface when the right moment comes. That ‘real self’ could be either the id, or the ego.

On one simplistic reading of Freud, the id was that inner monstrosity, and the ego was akin to the ‘false self’.* On many readings, Freud would like to reduce us all to a constellation of repressed urges. Needless to say (I hope), this reductionism is batty. You have to be cursed with a comically retrograde orientation to social life to think that people are ultimately just little Oedipal machines.

Other theorists (more plausibly) seem to want to say that the ego is hidden beneath the superego — as if the conscience were a polite mask, and the ego were your horrible true face. But I doubt that the ego counts as your ‘real self’, understood in that way. I don’t think that the selfish instincts operate in a quasi-autonomous way from the social ones, and I don’t think we have enough reason to think that the selfish instincts are developmentally prior to the social ones. Recent research done by Michael Tomasello has suggested that our pro-social instincts are just as basic and natural as the selfish ones. If that is right, then we can’t say that the ego is the ‘real self’, and the superego is the facade.


All the same, we ought to think that there is such a thing as an ‘authentic self’. After all, it looks as though we all have fixed characteristics that are relatively stable over time, and that these characteristics reliably ground our actions in a predictable way. I think it can be useful, and commonsensical, to understand some of these personality traits as authentic parts of a person’s character.

On an intuitive level, there seem to be two criteria for authenticity which distinguish it from inauthentic action. First, drawing on work by Harry Frankfurt, we expect that authenticity should involve wholeheartedness — a settled, undivided endorsement of certain kinds of actions, beliefs, and orientations towards states of affairs. Second, those traits should be presented honestly, and in line with the actual beliefs that the actor has about the traits and where they come from. And notice that both of these ideas, wholeheartedness and honesty, make little or no allusion to Freudian psychology, or to a mysterious inner nature.

So the very idea of authenticity is both a social thing and a psychological thing, not either one in isolation. It makes no sense to talk about an authentic real self, hidden in the miasma of the psyche. The idea is that being authentic involves doing justice to the way you’re putting yourself forward in social presentation as much as it involves introspective meditation on what you want and what you like.

By assuming that the authentic self is robustly non-social (e.g., something set apart from “responses” to others), we actually lose a grip on the very idea of authenticity. The fact is, you can’t even try to show good faith in putting things forward at face value unless you first assume that there is somebody else around to see it. Robinson Crusoe, trapped on a desert island, cannot act ‘authentically’ or ‘inauthentically’. He can only act, period.

So when Little says that “script, response, and self all seem to come into the situation of action”, I think he is saying something true, but which does not bear on the question of whether or not some action is authentic. To act authentically is to engage in a kind of social cognition. Authenticity is a social gambit, an ongoing project of putting yourself forward as a truth-teller, which is both responsive to others and grounded in projects that are central to your concern.

In this sense, even scripted actions can be authentic. “I love you” is a trope, but it’s not necessarily pretence to say it. [This possibility is mentioned at the closing of Little’s essay, of course. I would like to say, though: it’s more than just possible, it’s how things really are.]

* This sentence was substantially altered after posting. Commenter JMRC, below, pointed out that it is probably not so easy to portray Freud in caricature.


Men, Women and Consent

A little while ago I flagged up a new interactive philosophy experiment that deals with issues of consent. It’s now been completed by well over a thousand people, and it’s throwing up some interesting results. In particular, and I can’t say I find it surprising, there seems to be quite a large difference between how men and women view consent.

(What’s to follow will make more sense if you complete the activity before reading.)

I’ve analysed the responses to two of the scenarios featured in the experiment. The first asks whether you would be doing something wrong if you went ahead with a sexual encounter in the knowledge that your partner would almost certainly come to regret it later. The second asks whether you would be doing something wrong if you went ahead with a sexual encounter in the knowledge that your partner (a) had been drinking (albeit they remain cogent); and (b) would not have consented to the sexual encounter if they hadn’t been drinking.

The data shows that 68% of women, compared to only 58% of men, think it would be wrong to go ahead with the sexual encounter in the Future Regret case. And that 79% of women, compared to only 70% of men, think it would be wrong to go ahead in the Alcohol case.

These results are easily statistically significant, although, as always, I need to point out that the sample is not representative, and that there might be confounding variables in play (e.g., it’s possible that there are systematic differences between the sorts of males and females who have completed this activity – e.g., age).
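For readers who want to see what “statistically significant” amounts to here, the sketch below runs a standard two-proportion z-test on the Future Regret percentages. The test itself is textbook; the counts are hypothetical, since the male/female split of the sample is not reported above.

import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z statistic (rough sketch, no continuity correction)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical split: ~500 women and ~500 men, with 68% and 58% respectively
# answering that going ahead would be wrong in the Future Regret case.
z = two_proportion_z(hits_a=340, n_a=500, hits_b=290, n_b=500)
print(round(z, 2))  # about 3.3, comfortably past the usual 1.96 cutoff

At samples of roughly that size, a ten-point gap is hard to attribute to chance alone, which fits the description above—though, as noted, a non-representative sample and confounding variables can still undermine the inference.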

Pain, Pills & Will

A Pain That I'm Used To

(Photo credit: Wikipedia)

There are many ways to die, but the public concern tends to focus on whatever is illuminated in the media spotlight. 2012 saw considerable focus on guns and some modest attention to a somewhat unexpected and perhaps ironic killer, namely pain medication. In the United States, about 20,000 people die each year (about one every 19 minutes) due to pain medication. This typically occurs through what is called “stacking”: a person takes multiple pain medications, sometimes adding alcohol to the mix, resulting in death. While some people might elect to use this as a method of suicide, most of the deaths appear to be accidental—that is, the person had no intention of ending his life.

The number of deaths is so high in part because of the volume of painkillers being consumed in the United States. Americans consume 80% of the world’s painkillers, and consumption jumped 600% from 1997 to 2007. Of course, one rather important matter is why there is such excessive consumption of pain pills.

One reason is that doctors have been complicit in the increased use of pain medications. While there have been some efforts to cut back on prescribing pain medication, medical professionals were generally willing to write prescriptions for pain medication even in cases when such medicine was not medically necessary. This is similar to the over-prescribing of antibiotics that has come back to haunt us with drug resistant strains of bacteria. In some cases doctors no doubt simply prescribed the drugs to appease patients. In other cases profit was perhaps a motive. Fortunately, there have been serious efforts to address this matter in the medical community.

A second reason is that pharmaceutical companies did a good job selling their pain medications and encouraged doctors to prescribe them and patients to use them. While the industry had no intention of killing its customers, the pushing of pain medication has had that effect.

Of course, the doctors and pharmaceutical companies do not bear the main blame. While the companies supplied the product and the doctors provided the prescriptions, the patients had to want the drugs and use the drugs in order for this problem to reach the level of an epidemic.

The main causal factor would seem to be that the American attitude towards pain changed and resulted in the above mentioned 600% increase in the consumption of pain killers. In the past, Americans seemed more willing to tolerate pain and less willing to use heavy duty pain medications to treat relatively minor pains. These attitudes changed and now Americans are generally less willing to tolerate pain and more willing to turn to prescription pain killers. I regard this as a moral failing on the part of Americans.

As an athlete, I am no stranger to pain. I have suffered the usual assortment of injuries that go along with being a competitive runner and a martial artist. I also received some advanced education in pain when a fall tore my quadriceps tendon. As might be imagined, I have received numerous prescriptions for pain medication. However, I have used pain medications incredibly sparingly and if I do get a prescription filled, I usually end up properly disposing of the vast majority of the medication. I do admit that I did make use of pain medication when recovering from my tendon tear—the surgery involved a seven inch incision in my leg that cut down until the tendon was exposed. The doctor had to retrieve the tendon, drill holes through my knee cap to re-attach the tendon and then close the incision. As might be imagined, this was a source of considerable pain. However, I only used the pain medicine when I needed to sleep at night—I found that the pain tended to keep me awake at first. Some people did ask me if I had any problem resisting the lure of the pain medication (and a few people, jokingly I hope, asked for my extras). I had no trouble at all. Naturally, given that so many people are abusing pain medication, I did wonder about the differences between myself and my fellows who are hooked on pain medication—sometimes to the point of death.

A key part of the explanation is my system of values. When I was a kid, I was rather weak in regards to pain. I infer this is true of most people. However, my father and others endeavored to teach me that a boy should be tough in the face of pain. When I started running, I learned a lot about pain (I first started running in basketball shoes and got huge, bleeding blisters). My main lesson was that an athlete did not let pain defeat him and certainly did not let down the team just because something hurt. When I started martial arts, I learned a lot more about pain and how to endure it. This training instilled in me the belief that one should endure pain and that to give in to it would be dishonorable and wrong. This also includes the idea that the use of painkillers is undesirable. This was balanced by the accompanying belief, namely that a person should not needlessly injure his body. As might be suspected, I learned to distinguish between mere pain and actual damage occurring to my body.

Of course, the above just explains why I believe what I do—it does not serve to provide a moral argument for enduring pain and resisting the abuse of pain medication. What is wanted are reasons to think that my view is morally commendable and that the alternative is to be condemned. Not surprisingly, I will turn to Aristotle here.

Following Aristotle, one becomes better able to endure pain by habituation. In my case, running and martial arts built my tolerance for pain, allowing me to handle the pain ever more effectively, both mentally and physically. Because of this, when I fell from my roof and tore my quadriceps tendon, I was able to drive myself to the doctor—I had one working leg, which is all I needed. This ability to endure pain also serves me well in lesser situations, such as racing, enduring committee meetings and grading papers.

This, of course, provides a practical reason to learn to endure pain—a person is much more capable of facing problems involving pain when she is properly trained in the matter. Someone who lacks this training and ability will be at a disadvantage when facing situations involving pain and this could prove harmful or even fatal. Naturally, a person who relies on pain medication to deal with pain will not be training herself to endure. Rather, she will be training herself to give in to pain and to become dependent on medication that will become increasingly ineffective. In fact, some people end up becoming even more sensitive to pain because of their pain medication.

From a moral standpoint, a person who does not learn to endure pain properly and instead turns unnecessarily to pain medication is doing harm to himself and this can even lead to an untimely death. Naturally, as Aristotle would argue, there is also an excess when it comes to dealing with pain: a person who forces herself to endure pain beyond her limits or when doing so causes actual damage is not acting wisely or virtuously, but self-destructively. This can be used in a utilitarian argument to establish the wrongness of relying on pain medication unnecessarily as well as the wrongness of enduring pain stupidly. Obviously, it can also be used in the context of virtue theory: a person who turns to medication too quickly is defective in terms of deficiency; one who harms herself by suffering beyond the point of reason is defective in terms of excess.

Currently, Americans are, in general, suffering from a moral deficiency in regards to the matter of pain tolerance and it is killing us at an alarming rate. As might be suspected, there have been attempts to address the matter through laws and regulations regarding pain medication prescriptions. This supplies people with a will surrogate—if a person cannot get pain medication, then she will have to endure the pain. Of course, people are rather adept at getting drugs illegally and hence such laws and regulations are of limited effectiveness.

What is also needed is a change in values. As noted above, Americans are generally less willing to tolerate even minor pains and are generally willing to turn towards powerful pain medication. Since this was not always the case, it seems clear that this could be changed via proper training and values. What people need is, as discussed in an earlier essay, training of the will to endure pain that should be endured and resist the easy fix of medication.

In closing, I am obligated to add that there are cases in which the use of pain medication is legitimate. After all, the body and will are not limitless in their capacities and there are times when pain should be killed rather than endured. Obvious cases include severe injuries and illnesses. The challenge, then, is sorting out what pain should be endured and what should not. Since I am a crazy runner, I tend to err on the side of enduring pain—sometimes foolishly so. As such, I would probably not be the best person to address this matter.


Training the Will

In general, will is a very useful thing to have. After all, it allows a person to overcome factors that would make his decisions for him, such as pain, fear, anger, fatigue, lust or weakness. I would, of course, be remiss to not mention that the will can be used to overcome generally positive factors such as compassion, love and mercy as well. The will, as Kant noted, can apparently select good or evil with equal resolve. However, I will set aside the concern regarding the bad will and focus on training the will.

Based on my own experience, the will is rather like stamina—while people vary in what they get by nature, it can be improved by proper training. This, of course, nicely matches Aristotle’s view of the virtues.

While there are no doubt many self-help books discussing how to train the will with various elaborate and strange methods, the process is actually very straightforward and is like training any attribute. To be specific, it is mainly a matter of exercising the capacity but not doing so to excess (and thus burning out) or deficiency (and thus getting no gain). To borrow from Aristotle, one way of developing the will in regards to temperance is to practice refraining from pleasures to the proper degree (the mean) and this will help train the will. As another example, one can build will via athletic activities by continuing when pain and fatigue are pushing one to stop. Naturally, one should not do this to excess (because of the possibility of injury) nor be deficient in it (because there will be no gain).

As far as simple and easy ways to train the will, meditation and repetitive mental exercises (such as repeating prayers or simply repeated counting) seem to help in developing this attribute.

One advantage of the indirect training of the will, such as with running, is that it also tends to develop other resources that can be used in place of the will. To use a concrete example, when a person tries to get into shape to run, sticking with the running will initially take a lot of will because the pain and fatigue will begin quickly. However, as the person gets into shape it will take longer for them to start to hurt and feel fatigued. As such, the person will not need to use as much will when running (and if the person becomes a crazy runner like me, then she will need to use a lot of will to take a rest day from running). To borrow a bit from Aristotle, once a person becomes properly habituated to an activity, then the will cost of that activity becomes much less—thus making it easier to engage in that activity.  For example, a person who initially has to struggle to eat healthy food rather than junk food will find that resisting not only builds their will but also makes it easier to resist the temptations of junk.

Another interesting point of consideration is what could be called will surrogates. A will surrogate functions much like the will by allowing a person to resist factors that would otherwise “take control” of the person. However, what makes the will surrogate a surrogate is that it is something that is not actually the will—it merely serves a similar function. Having these would seem to “build the will” by providing a surrogate that can be called upon when the person’s own will is failing—sort of a mental tag team situation.

For example, a religious person could use his belief in God as a will surrogate to resist temptations forbidden by his faith, such as adultery. That is, he is able to do what he wills rather than what his lust is pushing him to do. As another example, a person might use pride or honor as will surrogates—she, for example, might push through the pain and fatigue of a 10K race because of her pride. Other emotions (such as love) and factors could also serve as will surrogates by enabling a person to do what he wills rather than what he is being pushed to do.

One obvious point of concern regarding will surrogates is that they could be seen not as allowing the person to do as he would will when he lacks his own will resources but as merely being other factors that “make the decision” for the person. For example, if a person resists having an affair with a coworker because of his religious beliefs, then it could be contended that he has not chosen to not have the affair. Rather, his religious belief (and perhaps fear of God) was stronger than his lust. If so, those who gain what appears to be willpower from such sources are not really gaining will. Rather they merely have other factors that make them do or not do things in a way that resembles the actions of the will.


Will

As a runner, martial artist and philosopher I have considerable interest in the matter of the will. As might be imagined, my view of the will is shaped mostly by my training and competitions. Naturally enough, I see the will from my own perspective and in my own mind. As such, much as Hume noted in his discussion of personal identity, I am obligated to note that other people might find that their experiences vary considerably. That is, other people might see their will as very different or they might even not believe that they have a will at all.

As a gamer, I also have the odd habit of modeling reality in terms of game rules and statistics—I am approaching the will in the same manner. This is, of course, similar to modeling reality in other ways, such as using mathematical models.

In my experience, my will functions as a mental resource that allows me to remain in control of my actions. To be a bit more specific, the use of the will allows me to prevent other factors from forcing me to act or not act in certain ways. In game terms, I see the will as being like “hit points” that get used up in the battle against these other factors. As with hit points, running out of “will points” results in defeat. Since this is rather abstract, I will illustrate this with two examples.

This morning (as I write this) I did my usual Tuesday work out: two hours of martial arts followed by about two hours of running. Part of my running workout  was doing hill repeats in the park—this involves running up and down the hill over and over (rather like marching up and down the square). Not surprisingly, this becomes increasingly painful and fatiguing. As such, the pain and fatigue were “trying” to stop me. I wanted to keep running up and down the hill and doing this required expending those will points. This is because without my will the pain and fatigue would stop me well before I am actually physically incapable of running anymore. Roughly put, as long as I have will points to expend I could keep running until I collapse from exhaustion. At that point no amount of will can move the muscles and my capacity to exercise my will in this matter would also be exhausted. Naturally, I know that training to the point of exhaustion would do more harm than good, so I will myself to stop running even though I desire to keep going. I also know from experience that my will can run out while racing or training—that is, I give in to fatigue or pain before my body is actually at the point of physically failing.  These occurrences are failures of will and nicely illustrate that the will can run out or be overcome.

After my run, I had my breakfast and faced the temptation of two boxes of assorted chocolates. Like all humans, I really like sugar and hence there was a conflict between my hunger for chocolate and my choice to not shove lots of extra calories and junk into my pie port. My hunger, of course, “wants” to control me. But, of course, if I yield to the hunger for chocolate then I am not in control—the desire is directing me against my will. Of course, the hunger is not going to simply “give up” and it must be controlled by expending will and doing this keeps me in control of my actions by making them my choice.
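Since I like to think in game terms, here is a toy sketch of the will-points model. The numbers, and the rate at which pain or temptation drains the pool, are invented purely for illustration.

# Toy 'will points' model: each round, a stressor (pain, fatigue, temptation)
# demands some will; when the pool can no longer pay, the stressor wins.
def rounds_until_will_fails(will_points, initial_cost, cost_increase=0):
    """How many rounds can be endured before the will pool is exhausted."""
    rounds, cost = 0, initial_cost
    while will_points >= cost:
        will_points -= cost
        rounds += 1
        cost += cost_increase   # e.g., pain mounts as the workout goes on
    return rounds

# Hill repeats: each repeat hurts a bit more than the last.
print(rounds_until_will_fails(will_points=100, initial_cost=5, cost_increase=2))   # 8 repeats

# Resisting the chocolate: a flat, nagging cost each time the box catches my eye.
print(rounds_until_will_fails(will_points=100, initial_cost=10))                   # 10 resistances

On this picture, training the will (as discussed in the previous post) amounts to enlarging the pool, while habituation lowers the per-round cost.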

Naturally, many alternatives to the will can be presented. For example, Hobbes’ account of deliberation is that competing desires (or aversions) “battle it out”, but the stronger always wins and thus there is no room for will or choice. However, I rather prefer my view, and it seems to match my intuitions and experiences.
