
Philosopher’s Carnival No. 146

Hello new friends, philosophers, and like-minded internet creatures. This month TPM is hosting the Philosopher’s Carnival.

Something feels wrong with the state of philosophy today. Whence hath this sense of ill-boding come?

For this month’s Carnival, we shall survey a selection of recent posts that are loosely arranged around the theme of existential threats to contemporary philosophy. I focus on four. Pre-theoretic intuitions seem a little less credible as sources of evidence. Talk about possible worlds seems just a bit less scientific. The very idea of rationality looks as though it is being taken over by cognate disciplines, like cognitive science and psychology. And some of the most talented philosophers of the last generation have taken up arms against a scientific theory that enjoys a strong consensus. Some of these threats are disturbing, while others are eminently solvable. All of them deserve wider attention.

1. Philosophical intuitions

Over at Psychology Today, Paul Thagard argued that armchair philosophy is dogmatic. He lists eleven unwritten rules that he believes are part of the culture of analytic philosophy. Accompanying each of these dogmas, he proposes a remedy, ostensibly from the point of view of the sciences. [Full disclosure: Paul and I know each other well, and often work together.]

Paul’s list succeeds in capturing some of the worries that are sometimes expressed about contemporary analytic philosophy. It acts as a bellwether, a succinct statement of defiance. Unfortunately, I do not believe that most of the items on the list hit their target. But I do think that two points in particular cut close to the bone:

3. [Analytic philosophers believe that] People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don’t trust your intuitions.

4. [Analytic philosophers believe that] Thought experiments are a good way of generating intuitive evidence. Natural alternative: use thought experiments only as a way of generating hypotheses, and evaluate hypotheses objectively by considering evidence derived from systematic observations and controlled experiments.

From what I understand, Paul is not arguing against the classics in analytic philosophy. (Carnap, for example, was not an intuition-monger.) He’s also obviously not arguing against the influential strain of analytic philosophers who are descendants of Quine — indeed, he is one of those philosophers. Rather, I think Paul is worried that contemporary analytic philosophers have gotten a bit too comfortable trusting their pre-theoretic intuitions when prompted to respond to cases for the purpose of delineating concepts.

As Catarina Dutilh Novaes points out, some recent commentators have argued that no prominent philosophers have ever treated pre-theoretic intuitions as a source of evidence. If that’s true, then it would turn out that Paul is entirely off base about the role of intuition in philosophy.

Unfortunately, there is persuasive evidence that some influential philosophers have treated some pre-theoretic intuitions as being a source of evidence about the structure of concepts. For example, Saul Kripke (in Naming & Necessity, 1972, p. 42) explained that intuitiveness is the reason why there is a distinction between necessity and contingency in the first place: “Some philosophers think that something’s having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of it, myself. I really don’t know, in a way, what more conclusive evidence one can have about anything, ultimately speaking”.

2. Philosophical necessity

Let’s consider another item from Paul’s list of dogmas:

8. There are necessary truths that apply to all possible worlds. Natural alternative: recognize that it is hard enough to figure out what is true in this world, and there is no reliable way of establishing what is true in all possible worlds, so abandon the concept of necessity.

In this passage Paul makes a radical claim. He argues that we should do away with the very idea of necessity. What might he be worried about?

To make a claim about the necessity of something is to make a claim about its truth across all possible worlds. Granted, our talk about possible worlds sounds kind of spooky, but [arguably] it is really just a pragmatic intellectual device, a harmless way of speaking. If you like, you could replace the idea of a ‘possible world’ with a ‘state-space’. When computer scientists at Waterloo learn modal logic, they replace one idiom with another — seemingly without incident.
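
To see how harmless that swap can be, here is a minimal sketch (my own toy illustration, not anything from the posts discussed; the world names and the ‘raining’ proposition are invented). It treats possible worlds as nothing more than labelled states, and a proposition as the set of states where it holds:

    # Possible worlds as a plain state-space: a world is a label,
    # a proposition is the set of worlds where it is true.
    worlds = {"w1", "w2", "w3"}
    raining = {"w1", "w2"}          # proposition: "it is raining"

    def possibly(prop, worlds):
        # "Possibly p" is true iff p holds in at least one world/state.
        return any(w in prop for w in worlds)

    def necessarily(prop, worlds):
        # "Necessarily p" is true iff p holds in every world/state.
        return all(w in prop for w in worlds)

    print(possibly(raining, worlds))     # True: it rains in some state
    print(necessarily(raining, worlds))  # False: it does not rain in w3

Nothing metaphysically loaded is going on there; ‘possibly’ and ‘necessarily’ are just quantifiers over a finite set of states.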

If possible worlds semantics were just a way of speaking, then it would not be objectionable. Indeed, the language of possible worlds seems to be cooked into the way we reason about things. Consider counterfactual claims, like “If Oswald hadn’t shot Kennedy, nobody else would’ve.” These claims are easy to make and come naturally to us. You don’t need a degree in philosophy to talk about how things could have been; you just need some knowledge of a language and an active imagination.

But when you slow down and take a closer look at what has been said there, you will see that the counterfactual claim involves discussion of a possible (imaginary) world where Kennedy had not been shot. We seem to be talking about what that possible world looks like. Does that mean that this other possible world is real — that we’re making reference to this other universe, in roughly the same way we might refer to the sun or the sky? Well, if so, then that sounds like it would be a turn toward spooky metaphysics.

Hence, some philosophers seem to have gone a bit too far in their enthusiasm for the metaphysics of possible worlds. As Ross Cameron reminds us, David K. Lewis argued that possible worlds are real:

For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world with round squares is a world with round squares as parts.  And so, to believe in the latter world is to believe in round squares.  And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which could not exist.  In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, or etc.

And to make matters worse, some people even argue that impossible worlds are real, ostensibly for similar reasons. Some people…

…like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to include impossible worlds.

Much like the White Queen, proponents of this view want to believe impossible things before breakfast. The only difference is that they evidently want to keep at it all day long.

Cameron argues that there is a difference between different kinds of impossibility, and that at least one form of impossibility cannot be part of our ontology. If you’re feeling dangerous, you can posit impossible concrete things, e.g., round squares. But you cannot say that there are worlds where “2+2=5” and still call yourself a friend of Lewis:

For Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4.

While Cameron presents us with a cogent rebuttal to the impossibilist, his objection still leaves open the possibility that there are impossible worlds — at least, so long as the impossible worlds involve exotic concrete entities like the round square and not incoherent abstracta.

So what we need is a scientifically credible account of necessity and possibility. In a whirlwind of a post over at LessWrong, Eliezer Yudkowsky argues that when we reason using counterfactuals, we are making a mixed reference which involves reference to both logical laws and the actual world.

[I]n one sense, “If Oswald hadn’t shot Kennedy, nobody else would’ve” is a fact; it’s a mixed reference that starts with the causal model of the actual universe where [Oswald was a lone agent], and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like ‘six’ for the product of apples on the table, is not actually present anywhere in the universe.

Yudkowsky argues that this is part of what he calls the ‘great reductionist project’ in scientific explanation. For Yudkowsky, counterfactual reasoning is quite important to the project and prospects of a certain form of science. Moreover, claims about counterfactuals can even be true. But unlike Lewis, Yudkowsky doesn’t need to argue that counterfactuals (or counterpossibles) are really real. This puts Yudkowsky on some pretty strong footing. If he is right, then it is hardly any problem for science (cognitive or otherwise) if we make use of a semantics of possible worlds.
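
To make the idea a little more concrete, here is a minimal sketch of ‘counterfactual surgery’ on a toy causal model of the Oswald case. The model and its two variables are my own invention for illustration, not Yudkowsky’s code:

    def kennedy_shot(oswald_shoots, other_shooter_exists):
        # Toy structural equation: Kennedy is shot iff some shooter acts.
        return oswald_shoots or other_shooter_exists

    # The actual world, as we believe it to be: Oswald acted alone.
    actual = {"oswald_shoots": True, "other_shooter_exists": False}

    # Counterfactual surgery: keep the rest of the model fixed, but force
    # the antecedent ("Oswald hadn't shot Kennedy") by intervention.
    surgered = dict(actual, oswald_shoots=False)

    print(kennedy_shot(**actual))    # True: Kennedy was shot
    print(kennedy_shot(**surgered))  # False: so "nobody else would've" comes out true

The counterfactual is evaluated by combining a causal model of the actual world with a purely logical operation on that model; no other concrete universe needs to be posited.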

Notice, for Yudkowsky’s project to work, there has to be such a thing as a distinction between abstracta and concreta in the first place, such that both are the sorts of things we’re able to refer to. But what, exactly, does the distinction between abstract and concrete mean? Is it perhaps just another way of upsetting Quine by talking about the analytic and the synthetic?

In a two-part analysis of reference [here, then here], Tristan Haze at Sprachlogik suggests that we can understand referring activity as contact between nodes belonging to distinct language-systems. In his vernacular, reference to abstract propositions involves the direct comparison of two language-systems, while reference to concrete propositions involves the coordination of systems in terms of a particular object. But I worry that unless we learn more about the causal and representational underpinnings of a ‘language-system’, there is no principled reason stopping us from concluding that his theory of reference is just a comparison of languages. And if so, then it would be well-trod territory.

3. Philosophical rationality

But let’s get back to Paul’s list. Paul seems to think that philosophy has drifted too far away from contemporary cognitive science. He worries that philosophical expertise is potentially cramped by cognitive biases.

Similarly, at LessWrong, Lukeprog worries that philosophers are not taking psychology very seriously.

Because it tackles so many questions that can’t be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn’t: we generally are as “stupid and self-deceiving” as science assumes we are. We’re “predictably irrational” and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one’s rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn’t seem so. I don’t see much Kahneman & Tversky in philosophy syllabi — just light-weight “critical thinking” classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don’t like. So what’s really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.

In some sense or other, Luke is surely correct. Philosophers really should be paying close attention to the antecedents of (ir)rationality, and really should be training their students to do exactly that. Awareness of cognitive illusions must be a part of the philosopher’s toolkit.

But does that mean that cognitive science should be a part of the epistemologist’s domain of research? The answer looks controversial. Prompted by a post by Leah Lebresco, Eli Horowitz at Rust Belt Philosophy argues that we also need to take care that we don’t simply conflate cognitive biases with fallacies. Instead, Horowitz argues that we ought to make a careful distinction between cognitive psychology and epistemology. In a discussion of a cognitive bias that Lebresco calls the ‘ugh field’, Horowitz writes:

On its face, this sort of thing looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature… it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself.

In principle, Eli’s point is sound. There is, after all, at least a superficial difference between dispositions to (in)correctness and actual facts about (in)correctness. But even if you think he is making an important distinction, Leah seems to be making a useful practical point about how philosophers can benefit from a change in pedagogy. Knowledge of cognitive biases really should be a part of the introductory curriculum. Development of the proper reasoning environment is, for all practical purposes, of major methodological interest to those who teach how to reason effectively. So it seems that in order to do better philosophy, philosophers must be prepared to do some psychology.

4. Philosophical anti-Darwinism

The eminent philosopher Thomas Nagel recently published a critique of Darwinian accounts of evolution through natural selection. In this effort, Nagel joins Jerry Fodor and Alvin Plantinga, who have also published philosophical worries about Darwinism. The works in this subgenre have by and large been thought to be lacking in empirical and scholarly rigor. This trend has caused a great disturbance in the profession, as philosophical epistemologists and philosophers of science are especially sensitive to the ridicule they face from scientists who write in the popular press.

Enter Mohan Matthen. Writing at NewAPPS, Mohan worries that some of the leading lights of the profession are not living up to expectations.

Why exactly are Alvin Plantinga and Tom Nagel reviewing each other? And could we have expected a more dismal intellectual result than Plantinga on Nagel’s Mind and Cosmos in the New Republic? When two self-perceived victims get together, you get a chorus of hurt: For recommending an Intelligent Design manifesto as Book of the Year, Plantinga moans, “Nagel paid the predictable price; he was said to be arrogant, dangerous to children, a disgrace, hypocritical, ignorant, mind-polluting, reprehensible, stupid, unscientific, and in general a less than wholly upstanding citizen of the republic of letters.”

My heart goes out to anybody who utters such a wail, knowing that he is himself held in precisely the same low esteem. My mind, however, remains steely and cold.

Plantinga writes, “Nagel supports the commonsense view that the probability of [life evolving by natural selection] in the time available is extremely low.” And this, he says, is “right on target.” This is an extremely substantive scientific claim—and given Plantinga’s mention of “genetic mutation”, “time available,” etc., it would seem that he recognizes this. So you might hope that he and Nagel had examined the scientific evidence in some detail, for nothing else would justify their assertions on this point. Sadly, neither produces anything resembling an argument for their venturesome conclusion, nor even any substantial citation of the scientific evidence. They seem to think that the estimation of such probabilities is well within the domain of a priori philosophical thought. (Just to be clear: it isn’t.)

Coda

Pre-theoretic intuitions are here to stay, so we have to moderate how we think about their evidential role. The metaphysics of modality cannot be dismissed out of hand — we need necessity. But we also need the idea of necessity to be tempered by our best scientific practices.

The year is at its nadir. November was purgatory, as all Novembers are. But now December has arrived, and the nights have crowded out the days. And an accompanying darkness has descended upon philosophy. Though the wind howls and the winter continues unabated, we can find comfort in patience. Spring cannot be far off.

Issue No.147 of the Philosopher’s Carnival will be hosted by Philosophy & Polity. See you next year.


Four kinds of philosophical people

We’ll begin this post where I ended the last. The ideal philosopher lives up to her name by striving for wisdom. In practice, the pursuit of wisdom involves developing a sense of good judgment when tackling very hard questions. I think there are four skills involved in the achievement of good judgment: self-insight, humility, rigor, and cooperativeness.

Even so, it isn’t obvious how the philosophical ideal is supposed to model actual philosophers. Even as I was writing the last post, I had the nagging feeling that I was playing the role of publicist for philosophy. A critic might say that I set out to talk about how philosophers were people, but only ended up stating some immodest proposals about the Platonic ideal of the philosopher. The critic might ask: Why should we think that it has any pull on real philosophers? Do the best professional philosophers really conceive of themselves in this way? If I have no serious answer to these questions, then I have done nothing more than indulge in a bit of cheerleading on behalf of my beloved discipline. So I want to start to address that accusation by looking at the reputations of real philosophers.

Each individual philosopher will have their own ideas about which virtues are worth investing in and which are worth disregarding. Even the best working philosophers end up favouring some of the virtues over others: e.g., some philosophers might find it relatively less important to write in order to achieve consensus among their peers, and instead put the accent on virtues like self-insight, humility, and rigour. Hence, we should expect philosophical genius to be correlated with predictable quirks of character which can be described using the ‘four virtues’ model. And if that is true, then we should be able to see how major figures in the history of philosophy measure up to the philosophical ideal. If the greatest philosophers can be described in light of the ideal, we should be able to say we’ve learned something about the philosophers as people.

And then I shall sing to the Austrian mountains in my best Julie Andrews vibrato: “public relations, this is not”.

—-

In my experience, many skilled philosophers who work in the Anglo-American tradition will tend to have a feverish streak. They will tend to find a research program which conforms with their intuitions (some of which may be treated as “foundational” or givens), and then hold onto that program for dear life. This kind of philosopher will change her mind only on rare occasions, and even then only on minor quibbles that do not threaten her central programme. We might call this kind of philosopher a “programmist” or “anti-skeptic”, since the programmist downplays the importance of humility, and is more interested in characterizing herself in terms of the other virtues, like philosophical rigour.

You could name a great many philosophers who seem to fit this character. Patricia and Paul Churchland come to mind: both have long held the view that the progress of neuroscience will require the radical reformation of our folk psychological vocabulary. However, when I try to think of a modern exemplar of this tradition, I tend to think of W.V.O. Quine, who held fast to most of his doctrinal commitments throughout his lifetime: his epistemological naturalism and holism, to take two examples. This is just to say that Quine thought that the interesting metaphysical questions were answerable by science. Refutation of the deeper forms of skepticism was not very high on Quine’s agenda; if there is a Cartesian demon, he waits in vain for the naturalist’s attention. The most attractive spin on the programmist’s way of doing things is to say that they have raised philosophy to the level of a craft, if not a science.

—-

Programmists are common among philosophers today. But if I were to take you into a time machine and introduce you to the elder philosophers, then it would be easy to lose all sense of how the moderns compare with their predecessors. The first philosophers lived in a world where science was young, if not absent altogether; there was no end of mystery to how the universe got on. For many of them, there was no denying that skepticism deserved a place at the table. From what they left behind, it seems that many ancient philosophers (save Aristotle and Pythagoras) did not possess the quality that we now think of as analytic rigour. The focus was, instead, on developing the right kind of life, and then — well, living it.

We might think of this as a wholly different approach to being a philosopher than that of our modern friend the programmist. These philosophers were self-confident and autonomous, yet had plenty to say to the skeptic. For lack of a better term, we might call this sort of philosopher a “guru” or “informalist”. The informalist trudges forward, not necessarily with the light of reason and explicit argument, but with that of insight and association, often expressed in aphorisms. To modern professional philosophers and academic puzzle-solvers, the guru may seem like a specialist in woo and mysticism, a peddler of non-sequiturs. Many an undergraduate in philosophy will aspire to be a guru, and endure the scorn of their peers (often rightly administered).

Be that as it may, some gurus end up having a vital place in the history of modern philosophy. Whenever I think of the ‘guru’ type of philosopher, I tend to think of Friedrich Nietzsche — and I feel justified in saying that in part because I suspect he would have accepted the title. For Nietzsche, insight was the single most important feature of the philosopher, and the single trait which he felt was altogether lacking in his peers.

Nietzsche was a man of passion, which is the reason why he is so easily misunderstood. Also, for a variety of reasons, Nietzsche was a man who suffered from intense loneliness. (In all likelihood, the fact that he was a rampant misogynist didn’t help in that department.) But he was also a preacher’s son, his rhetoric electric, his sermons brimming with insight and even weird lapses into latent self-deprecation. Moreover, he was a man who wrote in order to be read, and who was excited by the promise of new philosophers coming out to replace old canons. In the long run, he got what he wanted; as Walter Kaufmann wrote, “Nietzsche is one of the few philosophers since Plato whom large numbers of intelligent people read for pleasure”.

—-

“He has the pride of Lucifer.” — Russell on Wittgenstein

Some philosophers prefer to strike out on their own, paving an intellectual path by way of sheer stamina and force of will. We might call them the “lone wolves”. The lone wolf will often appear as a kind of contrarian with a distinctive personality. However, the lone wolf is set apart from a mere devil’s advocate by virtue of the fact that she needs to pump unusually deep wellsprings of creativity and cleverness into her craft. Because she needs to strike off alone, the wolf has to be prepared to chew bullets for breakfast: there is no controversial position she is incapable of endorsing, so long as it qualifies as a valid move in the game of giving and taking of reasons. She is out for adventure, to prove herself capable of working on her own. More than anything else, the lone wolf despises philosophical yes-men and yes-women. She has no time for the people who are satisfied by conventional wisdom — people who revere the ongoing dialectic as a sacred activity, a Great Conversation between the ages. The lone wolf says: the hell with this! These are problems, and problems are meant to be solved.

Ludwig Wittgenstein was a lone wolf, in the sense that nobody could quite refute Wittgenstein except for Wittgenstein. The philosophical monograph which made him famous, the Tractatus, began with an admission of idiosyncrasy: “Perhaps this book will be understood only by someone who has himself already had the thoughts that are expressed in it—or at least similar thoughts.—So it is not a textbook.—Its purpose would be achieved if it gave pleasure to one person who read and understood it.” He was a private man, who published very little while alive, and whose positions were sometimes unclear even to his students. He was an intense man, reputed to have brandished a hot poker at one of his contemporaries. And he had an oracular style of writing — the Tractatus resembles an overlong Powerpoint presentation, while the Investigations reads like a free-wheeling screed. These qualities conspired to give the man himself an almost mythical quality. As Ernest Nagel wrote in 1936 (quoting a Viennese friend): “in certain circles the existence of Wittgenstein is debated with as much ingenuity as the historicity of Christ has been disputed in others”.

Wittgenstein’s work has lasting significance. His private language argument is a genuine philosophical innovation, and is widely celebrated as such. He is the kind of philosopher that everybody has to know at least something about. But none of this came about by the power of idiosyncrasy alone. Wittgenstein achieved renown by demonstrating that he had a penetrating ability to go about the whole game of giving and taking reasons.

—-

“Synthesizers are necessarily dedicated to a vision of an overarching truth, and display a generosity of spirit towards at least wide swaths of the intellectual community. Each contributes partial views of reality, Aristotle emphasizes; so does Plotinus, and Proclus even more widely…” Randall Collins, The Sociology of Philosophies

Some philosophers are skilled at combining the positions and ideas that are alive in the ongoing conversation and weaving them into an overall picture. This is a kind of philosopher that we might call the “syncretist”. Much like the lone wolf, the syncretist despises unchallenged dogmatism; but unlike the lone wolf, this is not because she enjoys the prospect of throwing down the gauntlet. Rather, the syncretist enjoys the murmur of people getting along, engaged in a productive conversation. Hence, the syncretist is driven to reconcile opposing doctrines, so long as those doctrines are plausible. When she is at her best, the syncretist is able to generate a powerful synthesis out of many different puzzle pieces, allowing the conversation to become more abstract without becoming unintelligible. They do not just say, “Let a thousand flowers bloom” — instead, they demonstrate how the blooming of one flower happens only in the company of others.

The only philosopher that I have met who absolutely exemplifies the spirit of the syncretist, and persuasively presents the syncretist as a virtuous standpoint in philosophy, is the Stanford philosopher Helen Longino. In my view, her book The Fate of Knowledge is a revelation.

A more infamous example of the syncretist, however, is Jurgen Habermas. Habermas is an under-appreciated philosopher, a figure who is widely neglected in Anglo-American philosophy departments and (for a time) was widely scorned in certain parts of Europe. True, Habermas is a difficult philosopher to read. And, in fairness, one sometimes gets the sense that his stuff is a bit too ecumenical to be motivated on its own terms. But part of what makes Habermas close to an ideal philosopher is that he is an intellectual who has read just about everything — he has partaken in wider conversations, attempting to reconcile the analytic tradition with themes that stretch far beyond its remit. Habermas also has a prodigious output: he has written on a wide variety of subjects, including speech act theory, the ethics of assertion, political legitimation, Kohlberg’s stages of moral development, collective action, critical theory and the theory of ideology, social identity, normativity, truth, justification, civilization, argumentation theory, and doubtless many other things. If a dozen people carved up his bibliography and each staked a claim to part of it, you’d end up with a dozen successful academic careers.

For some intellectuals, syncretism is hard to digest. Just as both mothers in the court of King Solomon might have felt equally betrayed, the unwilling subjects of the syncretist’s analysis may respond with ill tempers. In particular, the syncretist grates on the nerves of those who aspire to achieve the status of lone wolf intellectuals. Take two examples, mentioned by Dr. Finlayson (Sussex). On the one hand, Marxist intellectuals will sometimes like to accuse Habermas of “selling out” — for instance, because Habermas has abandoned the usual rhythms of dialectical philosophy by trying his hand at analytic philosophy. On the other hand, those in analytic philosophy are not always very happy to recognize Habermas as a precursor to the shape of analytic philosophy today. John Searle explains in an uncompromising review: “Habermas has no theory of social ontology. He has something he calls the theory of communicative action. He says that the “purpose” of language is communicative action. This is wrong. The purpose of language is to perform speech acts. His concept of communicative action is to reach agreement by rational discussion. It has a certain irony, because Habermas grew up in the Third Reich, in which there was another theory: the “leadership principle”.” I suspect that Searle got Habermas wrong, but nobody said life as a philosopher was easy.

—-

Everything I’ve said above is a cartoon sketch of some philosophical archetypes. It is worth noting, of course, that none of the philosophers I have mentioned will fit into the neat little boxes I have made for them. The vagaries of the human personality resist being reduced to archetypes. Even in the above, I cheated a little: Nietzsche is arguably as much a lone wolf as he is a guru. I also don’t mean to suggest that all professional philosophers will fit into anything quite like these categories. Some are by reputation much too close to the philosophical ideal to fit into an archetype. (Hilary Putnam comes to mind.) And other professional philosophers are nowhere close to the ideal — there is no shortage of philosophers behaving badly. I mean only to say something about how you can use the ‘four virtues’ model of wisdom to say something interesting about philosophers themselves.

(BLS Nelson is the author of this article.)

Meaning Machines

The question of the meaning of life is an old one. However, it is unclear exactly what the question means. Normally, we have little trouble with meaning. Clouds mean rain. Joe meant to warn me. Sentences, words, signs and signals have conventional meanings. The question of the meaning of life is different. It is not simply the definition of a word that we seek. Philosophers and those drawn to various religions tend to be the ones to ask it, and the question can be taken on three levels. We can ask about the meaning of all life, of human life, and of the individual’s life.

I would argue that the question has little meaning when taken in the first two senses. Life has no meaning in itself; it is simply here in the universe. The same goes for human life considered as the life of a natural species. Species come and go in the geological record, and it is not clear what meaning they can have. However, when it comes to questioning the meaning of an individual’s life, the question comes alive.

Notoriously, in philosophy, the question of meaning is difficult and complicated. What is the Meaning of Being? What is the Being of Meaning? What is meaning anyway? Does it even make sense to ask about the meaning of life? If we decide that the question makes sense, what sense does it make? Various ideas are at play. Anxiety appears to be the motivation.

Sometimes we are worried that all our efforts will be for nothing if life has no meaning. A meaningless life may appear pointless, ‘superfluous,’ or ‘de trop’. Existentialists and absurdist playwrights hammered away at this theme with great gusto.

The question of the meaning of the lives of humans arises more or less acutely at different historical junctures. At times of great religious devoutness, the question is less pressing. Religion has an easier time than philosophy with the question of meaning. First, in religion, the question clearly has meaning; second, the question has an answer, and that answer is a resounding ‘Yes.’ Gods or spirits render moot the question of the meaning of human life by folding it within a higher-than-animal purpose. We may be the ones asking the question, but the answer has always been foretold. There is actually no question about the meaning of human life.

Philosophy cannot take this way out. The question is a live one. If there is no ‘foreordained’ meaning to life, then what sort of meaning is there? It is not that we have the option of living in a world totally devoid of meaning; for, if that were possible, the question of meaning would not even come up. I can only worry about the possible meaninglessness of life if I suppose or hope it might have a meaning after all.

David Hamlyn, my old supervisor in graduate school, used to remark that we get our first idea of causality from our own powers to make changes in the world around us. You want to roll a rock. You push on it and it rolls. You learn from experience which rock will roll and which will not, no matter how hard you push it. We think of causality as ‘out there’ but our understanding of the concept begins within us.

Similarly, people look for the meaning of human life, and would be glad to find it ‘out there’, ready made, a transcendent meaning that surpasses mere animal existence. This is to get things backwards. We are the ones who bring meanings into the world, and then, looking around, find them there.

Human beings are little meaning machines who cannot help but create and then leave meanings on everything that pertains to a human world. This morning I am sitting, typing on my laptop, in the courtyard of a hotel in Los Angeles, looking out on a beautiful blue-sky, palm tree morning by the pool. The only reason I am comfortable here and now, is that everything around has a fairly stable meaning. My meaning machine is turned on and working. If I were to come down suddenly with early stage dementia, and lose many of the concepts by which I now understand my being in the world, in Los Angeles, beside a hotel pool, I would be as frightened as a small child abandoned in a strange place. The interesting question is not how human life can have meaning, but how it could ever be a worry that it might have none.

Human, Really?


Sharon Begley recently wrote an interesting article, “What’s Really Human?” In this piece, she presents her concern that American psychologists have been making hasty generalizations over the years. To be more specific, she is concerned that such researchers have been extending the results gleaned from studies of undergraduates at American universities to the entire human race. For example, findings about what college students think about self-image are extended to all of humanity.

She notes that some researchers have begun to question this approach and have contended that American undergraduates are not adequate representatives of the entire human race in terms of psychology.

In one example, she considers the optical illusion involving two line segments. Although the segments have the same length, one has arrows on the ends pointing outward and the other has the arrows pointing inward. To most American undergraduates, the one with the inward-pointing arrows looks longer. But when the San of the Kalahari, African hunter-gatherers, look at the lines, they judge them to be the same length. This is taken to reflect the differing conditions in which they live.

This result is, of course, hardly surprising. After all, people who live in different conditions will tend to have different perceptual skill sets.

Begley’s second example involves the “ultimatum game,” which is typical of the tests intended to reveal truths about human nature via games played with money. The gist of the game is that there are two players, A and B. The experimenter gives player A $10. A then must decide how much of the $10 to offer B. If B accepts the deal, each keeps his or her share. If B rejects the deal, both leave empty-handed.
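
Here is a minimal sketch of the game’s payoff logic (my own illustration, not anything from Begley’s article). The responder’s ‘minimum acceptable offer’ is a hypothetical parameter standing in for whatever drives real players’ refusals:

    def ultimatum(pot, offer, min_acceptable):
        # One round: B accepts iff the offer meets B's threshold.
        if offer >= min_acceptable:
            return pot - offer, offer   # (A's payoff, B's payoff)
        return 0, 0                     # rejection: both leave empty-handed

    # Hypothetical responders: one who refuses anything below $3 (like the
    # undergraduates described below) and one who accepts $2.50 (like the
    # Hadza result described below).
    print(ultimatum(10, 2.50, min_acceptable=3.00))  # (0, 0): low offer punished
    print(ultimatum(10, 2.50, min_acceptable=2.50))  # (7.5, 2.5): low offer accepted

On this way of putting it, the cross-cultural question that follows is just a question about where that threshold sits, and why.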

When undergraduates in the States play, player A will typically offer $4-5, while those playing B will most often refuse anything below $3. This is taken as evidence that humans have evolved a sense of justice that leads us to make fair offers and also to punish unfair ones, even when doing so means a loss. According to the theorists, humans do this because we evolved in small tribal societies in which social cohesion mattered and freeloaders (or free riders, as they are sometimes called) had to be kept from getting away with their freeloading.

As Begley points out, “people from small, nonindustrial societies, such as the Hadza foragers of Tanzania, offer about $2.50 to the other player—who accepts it. A ‘universal’ sense of fairness and willingness to punish injustice may instead be a vestige of living in WEIRD, market economies.”

While this does provide some evidence for Begley’s view, it does seem rather weak. The difference between the Americans and the Hadza does not seem to be one of kind (that is, with Americans motivated by fairness and the Hadza not). Rather, it seems plausible to see this in terms of degree. After all, Americans refuse anything below $3, while the Hadza’s refusal level seems to be only 50 cents less. This difference could be explained in terms not of culture but of relative affluence. After all, to a typical American undergrad it is no big deal to forgo $3. However, someone who has far less (as is probably the case with the Hadza) would probably be willing to settle for less.

To use an analogy, imagine playing a comparable game using food instead of money. If I had recently eaten and knew I had a meal waiting at home, I would be more inclined to punish a small offer than accept it. After all, I have nothing to lose by doing so and would gain the satisfaction of denying my “opponent” her prize. However, if we were both very hungry and I knew that my cupboards were bare, then I would be much more inclined to accept a smaller offer on the principle that some food is better than none.

Naturally, cultural factors could also play a role in determining what is fair or not. After all, if A is given the money, B might regard it as A’s property and see A as being generous in sharing anything at all. This would show that culture is a factor, but that is hardly a shock. The idea of a universal human nature is quite consistent with its being modified by specific conditions. After all, individual behavior is modified by such conditions. To use an obvious example, my level of generosity depends on the specifics of the situation, such as the who, why, when and so on.

There is also the broader question of whether such money games actually reveal truths about justice and fairness. This topic goes beyond the scope of this brief essay, however.

Begley finishes her article by noting that “the list of universals-that-aren’t kept growing.” That is, allegedly universal ways of thinking and behaving have been found to not be so universal after all.

This shows that contemporary psychology is discovering what Herodotus noted thousands of years ago, namely that “custom is king,” and what the Sophists argued for, namely relativism. Later thinkers, such as Locke and other empiricists, were also critical of the idea of universal (specifically innate) ideas. In contrast, thinkers such as Descartes and Leibniz argued for the idea of universal (specifically innate) ideas.

I am not claiming that these thinkers are right (or wrong), but it is certainly interesting to see that these alleged “new discoveries” in psychology are actually very, very old news. What seems to be happening in this cutting-edge psychology is a return to the rationalist and empiricist battles over the innate content of the mind (or lack thereof).


Self Interest


One general challenge is getting people to act properly. What counts as proper behavior is, of course, a rather contentious matter. However, it seems reasonable to believe that at the most basic level harming others is not proper behavior.

It can be argued that self-interest will motivate people to act properly. The stock argument (which is based on Hobbes, Locke, and Smith) is that a rational person will realize that behaving badly is not in his self-interest because the consequences to himself will be negative.

Naturally, a person might be tempted to act badly if she thinks she can avoid these consequences, which is why it is rather important to make sure that these consequences are rather difficult to avoid. In addition to this concern, there are also other concerns about self-interest as a regulating factor on bad behavior.

First, for self-interest to be a regulating factor, a person’s self-interest must coincide with acting correctly. If a person’s self-interest (or what he believes is his self-interest) goes against acting correctly, then he will be inclined to act incorrectly. Not surprisingly, various philosophers have tried to argue that what is truly in a person’s self-interest is to act correctly. While there are some good arguments (such as those presented by Socrates) for this view, there are also good arguments that this is not the case. Naturally, from a purely practical standpoint the trick is to get people to believe that their self-interest coincides with not acting badly.

Second, even if it is assumed that it is in a person’s interest to act correctly, this will not motivate her to act correctly unless she knows what is in her self-interest. While it is tempting to assume that a person automatically knows what is in her self-interest, this need not be the case. After all, a person can think that something is in her best interest, yet be mistaken about this. A person might be misled by his emotions, or confused or wrong about the facts (to give but a few examples).

Third, even if it is assumed that a person knows what is in her self-interest and that it is in her self-interest to act correctly, there is still the question of whether the person will choose to act in accord with her self-interest or not. To use a simple example, a person might know that exercising is in her self-interest, but be unable to stick with exercising. Roughly put, a person might have knowledge but lack the will or motivation to act on this knowledge.

Thus, self-interest can play a role in regulating behavior, provided that it is in accord with correct behavior and that the person has both the knowledge and the will to act on that knowledge.


Being a Man I: Social Construct


Apparently some men are having trouble figuring out what it is to be a man. There are various groups and individuals that purport to be able to teach men how to be men (or at least dress like the male actors on the show Mad Men).

Before a person can become a man, it must be known what it is to be a man. There are, of course, many conceptions of what it is to be a man.

One option is to take the easy and obvious approach: just go with the generally accepted standards of society. After all, a significant part of being a man is being accepted as a man by other people.

On a large scale, each society has a set of expectations, stereotypes and assumptions about what it is to be a man. These can be taken as forming a set of standards regarding what one needs to be and do in order to be a man.

Naturally, there will be conflicting (even contradictory) expectations so that meeting the standards for being a man will require selecting a specific subset. One option is to select the ones that are accepted by the majority or by the dominant aspect of the population. This has the obvious advantage that this sort of manliness will be broadly accepted.

Another option is to narrow the field by selecting the standards held by a specific group. For example, a person in a fraternity might elect to go with the fraternity’s view of what it is to be a man (which will probably involve the mass consumption of beer). On the plus side, this enables a person to clearly be a man in that specific group. On the minus side, if the standards (or mandards) of the group differ in significant ways from the more general view of manliness, then the individual can run into problems if he strays outside of his mangroup.

A third option is to attempt to create your own standards of being a man and getting them accepted by others (or not). Good luck with that.

Of course, there is also the question of whether there is something more to being a man above and beyond the social construction of manliness. For some theorists, gender roles and identities are simply that: social constructs. Naturally, there is also the biological matter of being a male, but being biologically male and being a man are two distinct matters. There is a clear normative aspect to being a man and merely a biological aspect to being male.

If being a man is purely a matter of social construction (that is, we create and make up gender roles), then being a man in group X simply involves meeting the standards of being a man in group X. If that involves owning guns, killing animals, and chugging beer while watching porn and sports, then do that to be a man. If it involves sipping lattes, discussing Proust, listening to NPR and talking about a scrumptious quiche, then do that. So, to be a man, just pick your group, sort out the standards and then meet them as best you can.

In many ways, this is comparable to being good: if being good is merely a social construct, then to be good you just meet the moral standards of the group in question. This is, of course, classic relativism (and an approach endorsed by leading sophists).

But perhaps being a man is more than just meeting socially constructed gender standards. If so, a person who merely meets the “mandards” of being a man in a specific group might think he is a man, but he might be mistaken. But that is a matter for another time.


The Running Gender Mystery

Since I am a runner (well, returning to running as my tendon heals), I pay some attention to news about the sport. One thing I like about the coverage is that it tends to involve less controversy and bad news than other sports. Of course, running is not free of such controversy, as a recent incident attests.

Caster Semenya, a South African runner, is currently the world champion in the women’s 800-meter race. The controversy is that it has apparently been claimed that she is not a woman. The basis of this claim is that her testosterone levels were tested at three times the normal level. She has also been under observation because her racing ability has improved incredibly in a relatively short time. Since natural improvement is generally gradual, this raised suspicions.

One reply that has been given to the charge that “she is actually a he” is that Semenya certainly seems to be a female.

This sports controversy also raises a controversy over the nature of gender. Presumably Semenya appears to be a female (it has been implied that that sort of check has been done). However, there are cases in which a person looks like a female yet is genetically male. This condition is complete androgen insensitivity syndrome, and it is more common than one might expect. Such people have higher testosterone levels than “normal” women because they have testes (albeit undescended). I must emphasize that I am not making any claims about Semenya; I am merely bringing this up for the sake of the discussion.

Since human societies are generally built around an obsession with gender identity and divisions, this syndrome does create some difficulties. If the syndrome is discovered when the child is young, there is the option of assigning a gender through the use of medical means (including surgery). In some cases, the procedure is delayed until the child can make his/her own decision.

Sports are, of course, not free from the gender obsession. The concern over gender can, however, be seen as quite reasonable. After all, males have a general physical advantage over females, and for sports to be fair, males should be distinguished from females. This seems to be morally on par with divisions based on age (like age groups in road races) and weight (like in boxing). However, if someone looks like a woman yet has male genes (and the higher testosterone), then that person might be seen as having an unfair advantage over “normal” women. Of course, such a person might be at a disadvantage relative to “normal” male athletes.

One way to deal with this sort of concern would be to determine the degree to which a person with this syndrome has an advantage over “normal” women in regards to athletic competition. If such an advantage exists and places the person into the male range, then it would seem to be unfair to allow the person to compete against “normal” women. Of course, if people are to be tested to determine where they fall on the competitive spectrum, then fairness would seem to require that all athletes be tested and grouped based on their capabilities rather than on gender. Practical concerns (costs, for example) would make this sort of testing and sorting very unlikely. As such, the sorting of folks by gender is likely to remain the standard in sports, even though this approach is the cause of the difficulty in the matter at hand.

Because sorting is and will remain gender based, it seems most reasonable to allow a person with the syndrome to compete as the gender they have chosen (or been assigned). It is not a perfect solution, but seems to be the fairest approach. Naturally, the person would have to be “established” in the gender rather than simply deciding to be, for example, a woman for the purposes of competition after having lived as a male.

Of course, some “normal” women have naturally high levels of testosterone. This can presumably provide some women with an advantage over other women, but this would not be cheating. After all, some people are born with better lung capacity or more efficient muscles and this is not cheating.

It must be said, of course, that a person might also have unusually high levels of testosterone due to the use of synthetic testosterone as a steroid to increase athletic performance. If this is the case, then the ethics of the situation are quite clear: such cheating is morally unacceptable in sports.


Michael Jackson & Proper Emotions

I was recently asked how Michael Jackson’s death affected me. I had to be honest and report that it really had not impacted my life. I did feel a degree of pity. But I would feel the same upon learning about the death of anyone who did not deserve to die.

In contrast to my rather limited response, some fans have shown incredible pain at his loss. From their responses, one would think that they had lost a parent, husband or dear friend. My initial view was that they were overreacting and that their emotional response was simply not warranted or proper. This, naturally enough, started me thinking about whether my view had any actual merit or whether I was simply engaged in biased thinking. In order to help settle this, I started by considering the basis of my own rather limited feelings about his death and why I took his fans to be having improper emotions. In addition to dealing specifically with the matter at hand, this discussion also deals with the broader topic of proper emotions (or emotional responses, if you prefer).

In my own case, I like some of his music and I thought Thriller had a rather kick-ass video (especially since it had Vincent Price). However, I am not related to him, I never met him in person, and I never even exchanged emails with him. As such, I have no meaningful connection to him that would warrant a powerful emotional response to his untimely death. For me to react in a powerful way to his death would thus be improper, in that my response would far outweigh what I should be feeling. It would, to use an analogy, be like howling in pain because I merely pricked my finger. That sort of overreaction is not, as Aristotle might say, the right degree of emotion to feel for that situation. This is not to say that his death was on par with the pricking of a finger, just that his role in my life was extremely limited (seeing a few videos and hearing some songs).

From my perspective, the fans who are emotionally devastated by his death are overreacting. After all, most of them had most likely not even met him in person. At most, they might have seen him on stage during a live show. That hardly constitutes a meaningful connection between two people that would warrant such an extreme response. In my own case, I only form strong attachments to people I actually know and expect the attachment to be reciprocated. Otherwise, the relationship would seem to be something of an illusion and a fantasy. But perhaps that is a harsh thing to say. So, what I feel upon the death of another person depends on the relationship we had. If there was no meaningful relationship, then it would not be a proper reaction to feel terrible grief upon that person’s death. I should, of course, feel for other people, but my response should be a proper response, a fitting measure of grief for what has been lost to me.

One response to my view is that his fans attached great importance to him and that he was somehow very significant in their lives. Some people can form such one-way emotional bonds to someone who would not know them from Adam or Eve. As such, his loss would hurt them deeply, and thus it could be argued that their reactions are quite justified and proper. After all, people do get emotionally attached even to objects (such as cars or jewelry), and the loss of such items upsets them greatly. Obviously, the objects cannot love people back. Likewise, one might argue, a person could be quite emotionally attached to the image or idea of a celebrity and thus feel a terrible loss when that person dies.

In reply, it seems unreasonable to get so emotionally attached to objects. They are, after all, objects. Likewise, for a fan to get emotionally attached to a celebrity seems to be unreasonable. It is not that the celebrity is not a person, but that the typical fan is not interacting with the person. Rather, they are merely experiencing the celebrity’s public presentation. In the case of Jackson, his fans saw his videos, listened to his music, watched the TV coverage of his life, and perhaps saw him on stage or caught a glimpse of him in public. What they became attached to was not the person, for they knew not the person. Rather, they became attached to that public presentation. As such, when he died they did not lose him; they never had him. What they lost, to be rather rough about it, is the chance to hear new songs, see new videos, and see live shows. They can still experience almost all that they experienced of him by watching the videos or playing his music. So, even though he is dead, their relationship can continue almost unchanged, and extreme grief hardly seems warranted.

Of course, an even easier response to my view is to just say that people feel what they do and there is no right or wrong when it comes to emotions. That does have a certain appeal, but it is easily countered. For example, if a child were killed in a car wreck and an onlooker started laughing about it and making jokes, we would certainly say that it was not right for him to feel that way about the death of a child.

It might be claimed that I am a cold person who is unable to appreciate the loss experienced by Jackson’s devoted fans. Who am I, one might say, to judge their grief and tears as proper or improper? An excellent question, to which I give an obvious reply: if I am not to judge them, then I am not to be judged for judging them.


Emotion and Ethics

Paging through last week’s Newsweek, I came across Sharon Begley’s article “Adventures in Good and Evil.” I found the article rather interesting and, shockingly enough, have some things to say about it.

Begley accepts a current popular view of ethics: it is rooted in evolution and grounded in emotions.  She briefly runs through the stock argument for the claim that morality is an evolved behavior. Roughly put, the argument is that our primate relatives show what we would consider altruistic behavior (like helping each other or enduring hardship to avoid harming others of their kind). Naturally, the primates are more altruistic with their relatives. It is assumed that our primate ancestors had this same sort of behavior and it helped them survive, thus leading to us and our ethical behavior.

Perhaps this “just so” story is true.  Let us allow that it is.

Begley then turns to the second assumption, that ethics is more a matter of “gut emotion” than “rational, analytic thought.” Using a stock Philosophy 101 example, she writes:

“If people are asked whether they would be willing to throw a switch to redirect deadly fumes from a room with five children to a room with one, most say yes, and neuroimaging shows that their brain‘s rational, analytical regions had swung into action to make the requisite calculation. But few people say they would kill a healthy man in order to distribute his organs to five patients who will otherwise die, even though the logic—kill one, save five—is identical: a region in our emotional brain rebels at the act of directly and actively taking a man’s life, something that feels immeasurably worse than the impersonal act of throwing a switch in an air duct. We have gut feelings of what is right and what is wrong.”

Begley’s reasoning is, of course, that since the logic is identical, the different judgments in the two cases must be based in emotion rather than reason. While her view is reasonable, I disagree with her on two points: I believe that the logic is not actually identical and that her explanation of the distinction between the two cases is mistaken. Obviously enough, I need to make a case for this.

While the logic of the two cases is similar, the logic only becomes identical if the cases are considered in a rather abstract manner. To be specific, the logic is identical if we only consider that the agent is choosing between five deaths and one. If this fact were the only morally relevant fact about the situations, then the logic would indeed be identical (because the situations would be identical). However, there certainly seem to be morally relevant distinctions between the two cases, as sketched below.
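
To put the point about abstraction in a blunter form, here is a toy sketch (my own illustration, not drawn from Begley): represent each case only by the body count and the two cases are literally the same decision problem; add one further morally relevant feature and they come apart.

    # Represented only by how many die under each option, the two cases
    # are the same decision problem.
    fumes_case  = {"die_if_act": 1, "die_if_refrain": 5}
    organs_case = {"die_if_act": 1, "die_if_refrain": 5}
    print(fumes_case == organs_case)   # True: at this level of abstraction the "logic" is identical

    # Add one more (arguably) morally relevant feature and they differ.
    fumes_case["agent_directly_kills"]  = False   # redirecting the gas
    organs_case["agent_directly_kills"] = True    # carving up the healthy man
    print(fumes_case == organs_case)   # False: the situations are no longer identical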

One obvious distinction is the oft-discussed one between killing and letting die. In the first case, the agent has a role to play in who dies. However, the agent is not killing the children. Rather, s/he is deciding who the gas will kill. In the second case, if the agent does nothing, then s/he lets one person die. If she acts, then she kills a person. Since this distinction has been discussed at great length by other philosophers, I will not go beyond saying that it is reasonable to take this to be a morally relevant distinction. Hence, it is reasonable to consider the possibility that the cases are not identical, and hence that the logic is not identical. If this is the case, then the difference in the judgments need not be explained in terms of a gut reaction; it could be the result of a rational assessment of the moral distinction between killing and letting die.

Another matter worth considering in regards to the logic is that of moral theories. When I teach my ethics class, I use the same sort of examples that Begley employs: I contrast a case in which the agent must choose who dies with a case in which the agent must choose between killing and letting die. Naturally enough, I use a case like Begley’s first case to illustrate how our moral intuitions match utilitarianism: if we cannot save everyone, then we are inclined to choose more over less. However, I do not use the second case to illustrate that ethics is a matter of a gut reaction. Rather, I use it to show that we also have moral intuitions that in some cases it is not the consequences that matter. Rather, we have intuitions that certain actions “just aren’t right.” Naturally, I use this sort of example in the context of discussing deontology in general and Kant’s moral theory in particular. In the case at hand, it need not be a gut reaction that causes the agent to balk at killing an innocent person so as to scrap him for parts. On Kant’s view, reason would inform the agent that he must treat rational beings as ends and not merely as means. To murder a man for his organs, even to save five people, would be to treat him merely as a means and not as an end. Hence, it would be an immoral action. There is, obviously enough, no appeal to the gut here, and the logic of the cases would be different.

Other moral approaches would also ground the distinction without an appeal to the gut. For example, my religious students often point out that murdering someone would be an evil act because it violates God’s law. In this case, the appeal is not to the gut but to God’s law. As another example, a rule-utilitarian approach would also ground the distinction. After all, the practice of murdering people to use as parts would create more unhappiness than happiness: people would worry that they would be the next person being cut to pieces. In both of these examples the logic of the two cases is not identical and there is no appeal to the gut.

Naturally, it is reasonable to consider the role of emotions in moral decision making. Obviously, most people feel bad about murder and this no doubt plays a role in their view of the second case. However, to simply assume that the distinction is exhausted by the emotional explanation is clearly a mistake. After all, a person can clearly regard murdering one person to save five as immoral without relying on a gut reaction. It could, in fact, be a rational assessment of the situation.