
Should Killer Robots be Banned?

The Terminator. (Photo credit: Wikipedia)

You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.

-Will Rogers

 

Humans have been using machines to kill each other for centuries and these machines have become ever more advanced and lethal. In more recent decades there has been considerable focus on developing autonomous weapons. That is, weapons that can locate and engage the enemy on their own without being directly controlled by human beings. The crude seeking torpedoes of World War II are an example of an early version of such a killer machine. Once fired, the torpedo would be guided by acoustic sensors to its target and then explode—it was a crude, suicidal mechanical shark. Of course, this weapon had very limited autonomy since humans decided when to fire it and at what target.

Thanks to advances in technology, far greater autonomy is now possible. One peaceful example of this is the famous self-driving cars. While some see them as privacy-killing robots, they are not designed to harm people—quite the opposite, in fact. However, it is easy to see how the technology used to guide a car safely around people, animals and other vehicles could be used to guide an armed machine to its targets.

Not surprisingly, some people are rather concerned about the possibility of killer robots, or with less hyperbole, autonomous weapon systems. Recently there has been a push to ban such weapons by international treaty. While people are no doubt afraid of killer machines roaming about due to science fiction stories and movies, there are legitimate moral, legal and practical grounds for such a ban.

One concern is that while autonomous weapons might be capable of seeking out and engaging targets, they would lack the capability to make the legal and moral decisions needed to operate within the rules of war. As a specific example, there is the concern that a killer robot will not be able to distinguish between combatants and non-combatants as reliably as a human being. As such, autonomous weapon systems could be far more likely than human combatants to kill noncombatants due to improper classification.

One obvious reply is that while there are missions in which the ability to make such distinctions would be important, there are others where it would not be required on the part of the autonomous weapon. If a robot infantry unit were engaged in combat within a populated city, then it would certainly need to be able to make such a distinction. However, just as a human bomber crew sent on a mission to destroy a factory would not be required to make such distinctions, an autonomous bomber would not need this ability. As such, this concern only has merit in cases in which such distinctions must be made and could reasonably be made by a human in the same situation. Thus, a sweeping ban on autonomous weapons would not be warranted by this concern.

A second obvious reply is that this is a technical problem that could be solved to the point where an autonomous weapon is at least as reliable as an average human soldier at distinguishing combatants from non-combatants. It seems likely that this could be done, given that the objective is a human level of reliability. After all, humans in combat do make mistakes in this matter, so the bar is not terribly high. As such, banning such weapons would seem to be premature—it would need to be shown that such weapons could not make this distinction as well as an average human in the same situation.

A second concern is based on the view that the decision to kill should be made by a human being and not by a machine. Such a view could be based on an abstract view about the moral right to make killing decisions or perhaps on the view that humans would be more merciful than machines.

One obvious reply is that autonomous weapons are still just weapons. Human leaders will, presumably, decide when they are deployed and give them their missions. This is analogous to a human firing a seeking missile—the weapon tracks and destroys the intended target, but the decision that someone should die was made by a human. Presumably humans would be designing the decision-making software for the machines and they could program in a form of digital mercy—if desired.

There is, of course, the science fiction concern that the killer machines will become completely autonomous and fight their own wars (as in Terminator and “Second Variety”). The concern about rogue systems is worth considering, but is certainly a tenuous basis for a ban on autonomous weapons.

Another obvious reply is that while machines would probably lack mercy, they would also lack anger and hate. As such, they might actually be less awful about killing than humans.

A third concern is based on the fact that autonomous machines are just machines without will or choice (which might also be true of humans). As such, wicked or irresponsible leaders could acquire autonomous weapons that will simply do what they are ordered to do, even if that involves slaughtering children.

The obvious, but depressing, reply to this is that such leaders never seem to want for people to do their bidding, however awful that bidding might be. Even a cursory look at the history of war and terrorism shows that this is a terrible truth. As such, autonomous weapons do not seem to pose a special danger in this regard: anyone who could get an army of killer robots would almost certainly be able to get an army of killer humans.

There is, of course, a legitimate concern that autonomous weapons could be hacked and used by terrorists or other bad people. However, this would be the same as such people getting access to non-autonomous weapons and using them to hurt and kill people.

In general, the moral motivation of the people who oppose autonomous weapons is laudable. They presumably wish to cut down on death and suffering. However, this goal seems to be better served by the development of autonomous weapons. Some reasons for this are as follows.

First, since autonomous weapons are not crewed, their damage or destruction will not result in harm or death to people. If a manned fighter plane is destroyed, that is likely to result in harm or death to a person. However, if a robot fighter plane is shot down, no one dies. If both sides are using autonomous weapons, then the casualty count would presumably be lower than in a conflict where the weapons are all manned. To use an analogy, automating war could be analogous to automating dangerous factory work.

Second, autonomous weapons can advance the existing trend in precision weapons. Just as “dumb” bombs that were dropped in massive raids gave way to laser-guided bombs, autonomous weapons could provide an even greater level of precision. This would be due, in part, to the fact that there would be no human crew at risk and hence crew safety would no longer be a concern. For example, rather than having a manned aircraft launch a missile at a target while jetting by at high altitude, an autonomous craft could approach the target closely at a lower speed in order to ensure that the missile hits the right target.

Thus, while the proposal to ban such weapons is no doubt motivated by the best of intentions, the ban itself would not be morally justified.

 


Owning Human Genes

Human genome to genes (Photo credit: Wikipedia)

While it sounds a bit like science fiction, the issue of whether or not human genes can be owned has become a matter of concern. While the legal issue is interesting, my focus will be on the philosophical aspects of the matter. After all, it was once perfectly legal to own human beings—so what is legal is rather different from what is right.

Perhaps the most compelling argument for the ownership of genes is a stock consequentialist argument. If corporations cannot patent and thus profit from genes, then they will have no incentive to engage in expensive genetic research (such as developing tests for specific genes that are linked to cancer). The lack of such research will mean that numerous benefits to individuals and society will not be acquired (such as treatments for specific genetic conditions). As such, not allowing patents on human genes would be wrong.

While this argument does have considerable appeal, it can be countered by another consequentialist argument. If human genes can be patented, then this will allow corporations to take exclusive ownership of these genes, thus granting them a monopoly. Such patents will allow them to control the research conducted even at non-profit institutions such as universities (which sometimes do research for the sake of research), thus restricting the expansion of knowledge and potentially slowing down the development of treatments. This monopoly would also allow the corporation to set the pricing for relevant products or services without any competition, which is likely to result in artificially high prices that could deny people needed medical services or products simply because they cannot afford them. As such, allowing patents on human genes would be wrong.

Naturally, this counter argument can be countered. However, the harms of allowing the ownership of human genes would seem to outweigh the benefits—at least when the general good is considered. Obviously, such ownership would be very good for the corporation that owns the patent.

In addition to the moral concerns regarding the consequences, there is also the general matter of whether it is reasonable to regard a gene as something that can be owned. Addressing this properly requires some consideration of the basis of property.

John Locke presents a fairly plausible account of property: a person owns her body and thus her labor. While everything is initially common property, a person makes something her own property by mixing her labor with it. To use a simple example, if Bill and Sally are shipwrecked on an ownerless island and Sally gathers coconuts from the trees and builds a hut for herself, then the coconuts and hut are her property. If Bill wants coconuts or a hut, he’ll have to either do work or ask Sally for access to her property.

On Locke’s account, perhaps researchers could mix their labor with the gene and make it their own. Or perhaps not—I do not, for example, gain ownership of the word “word” in general because I mixed my labor with it by typing it out. I just own the work I have created in particular. That is, I own this essay, not the words making it up.

Sticking with Locke’s account, he also claims that we are owned by God because He created us. Interestingly, for folks who believe that God created the world, it would seem to follow that a corporation cannot own a human gene. After all, God is the creator of the genes and they are thus His property. As such, any attempt to patent a human gene would be an infringement on God’s property rights.

It could be countered that although God created everything, since He allows us to own the stuff He created (like land, gold, and apples), then He would be fine with people owning human genes. However, the basis for owning a gene would still seem problematic—it would be a case of someone trying to patent an invention which was invented by another person—after all, if God exists then He invented our genes, so a corporation cannot claim to have invented them. If the corporation claims to have a right to ownership because they worked hard and spent a lot of money, the obvious reply is that working hard and spending a lot of money to discover what is already owned by another would not transfer ownership. To use an analogy, if a company worked hard and spent a lot to figure out the secret formula to Coke, it would not thus be entitled to own Coca Cola’s formula.

Naturally, if there is no God, then the matter changes (unless we were created by something else, of course). In this case, the gene is not the property of a creator, but something that arose naturally. While someone can rightfully claim to be the first to discover a gene, no one could claim to be the inventor of a naturally occurring gene. As such, the idea that ownership would be conferred by mere discovery would seem to be a rather odd one, at least in the case of a gene.

The obvious counter is that people claim ownership of land, oil, gold and other resources by discovering them. One could thus argue that genes are analogous to gold or oil: discovering them turns them into property of the discoverer. There are, of course, those who claim that the ownership of land and such is unjustified, but this concern will be set aside for the sake of the argument (but not ignored—if discovery does not confer ownership, then gene ownership would be right out in regards to natural genes).

While the analogy is appealing, the obvious reply is that when someone discovers a natural resource, she gains ownership of that specific find and not all instances of what she found. For example, when someone discovers gold, they own that gold but not gold itself. As another example, if I am the first human to stumble across naturally occurring Unobtanium on an owner-less alien world, I thus do not gain ownership of all instances of Unobtanium even if it cost me a lot of money and work to find it. However, if I artificially create it in my philosophy lab, then it would seem to be rightfully mine. As such, the researchers that found the gene could claim ownership of that particular genetic object, but not the gene in general on the grounds that they merely found it rather than created it. Also, if they had created a new artificial gene that occurs nowhere in nature, then they would have grounds for a claim of ownership—at least to the degree they created the gene.


Do Dogs Have Morality?

A Good Dog or a Moral Dog?

The idea that morality has its foundations in biology is enjoying considerable current popularity, although the idea is not a new one. However, the current research is certainly something to be welcomed, if only because it might give us a better understanding of our fellow animals.

Being a philosopher and a long-time pet owner, I have sometimes wondered whether my pets (and other animals) have morality. This matter was easily settled in the case of cats: they have a morality, but they are evil. My best cats have been paragons of destruction, gladly throwing the claw into lesser beings and sweeping breakable items to the floor with feline glee. Lest anyone get the wrong idea, I really like cats—in part because they are so very evil in their own special ways. The matter of dogs and morality is rather more controversial. Given that all of ethics is controversial, this should hardly be a shock.

Since dogs are social animals that have been shaped and trained by humans for thousands of years, it would hardly be surprising that they exhibit behaviors that humans would regard as moral in nature. However, it is well known that people anthropomorphize their dogs and attribute to them qualities that they might not, in fact, possess. As such, this matter must be approached with due caution. To be fair, we also anthropomorphize each other and there is the classic philosophical problem of other minds—so it might be the case that neither dogs nor other people have morality because they lack minds. For the sake of the discussion I will set aside the extreme version of the problem of other minds and accept a lesser challenge. To be specific, I will attempt to make a plausible case for the notion that dogs have the faculties needed to possess morality.

While I will not commit to a specific morality here, I will note that for a creature to have morality it would seem to need certain mental faculties. These would seem to include cognitive abilities adequate for making moral choices and perhaps also emotional capabilities (if morality is more a matter of feeling than thinking).

While dogs are not as intelligent as humans (on average) and they do not use true language, they clearly have a fairly high degree of intelligence. This is perhaps most evident in the fact that they can be trained in very complex tasks and even in professions (such as serving as guide or police dogs). They also exhibit an exceptional understanding of human emotions and while they do not have language, they certainly can learn to understand verbal and gesture commands given by humans. Dogs also have an understanding of tokens and types. To be specific, they are quite good at recognizing individuals and also good at recognizing types of things. For example, a dog can distinguish its owner while also distinguishing humans from cats. As another example, my dogs have always been able to recognize any sort of automobile and seem to understand what they do—they are generally eager to jump aboard whether it is my pickup truck or someone else’s car. On the face of it, dogs seem to have the mental horsepower needed to engage in basic decision making.

When it comes to emotions, we have almost as much reason to believe that dogs feel and understand them as we do for humans having that ability. The main difference is that humans can talk (and lie) about how they feel; dogs can only observe and express emotions. Dogs clearly express anger, joy, fear and other emotions and seem to understand those emotions in other animals. This is shown by how dogs react to expressions of emotion. For example, dogs seem to recognize when their owners are sad or angry and react accordingly. Thus, while dogs might lack all the emotional nuances of humans and the capacity to talk about them, they do seem to have the basic emotional capabilities that might be necessary for ethics.

Of course, showing that dogs have intelligence and emotions would not be enough to show that dogs have morality. What is needed is some reason to think that dogs use these capabilities to make moral decisions and engage in moral behavior.

Dogs are famous for possessing traits that are analogous to (or the same as) virtues such as loyalty, compassion and courage. Of course, Kant recognized these traits but still claimed that dogs could not make moral judgments. As he saw it, dogs are not rational beings and do not act in accord with the moral law. But, roughly put, they seem to have an ersatz sort of ethics in that they can act in ways analogous to human virtue. While Kant does make an interesting case, there do seem to be some reasons to accept that dogs can engage in basic moral judgments. Naturally, since dogs do not write treatises on moral philosophy, I can only speculate on what is occurring in their minds (or brains). As noted above, there is always the risk of projecting human qualities onto dogs and, of course, they make this very easy to do.

One area that seems to have potential for showing that dogs have morality is the matter of property. While some might think that dogs regard whatever they can grab (be it food or toys) as their property, this is not always the case. While it seems true that some dogs are Hobbesian, this is also true of humans. Dogs, based on my decades of experience with them, seem to be capable of clearly grasping property. For example, my husky Isis has a large collection of toys that are her possessions. She reliably distinguishes between her toys and very similar items (such as shoes, clothing, sporting goods and so on) that do not belong to her. While I do not know for sure what happens in her mind, I do know that when I give her a toy and go through the “toy ritual” she gets it and seems to recognize that the toy is her property now. Items that are not given to her are apparently recognized as being someone else’s property and are not chewed upon or dragged outside. In the case of Isis, this extends (amazingly enough) even to food—anything handed to her or in her bowl is her food, anything else is not. Naturally, she will ask for donations, even when she could easily take the food. While other dogs have varying degrees of understanding of property and territory, they certainly seem to grasp this. Since the distinction between mine and not mine seems rather important in ethics, this suggests that dogs have some form of basic morality—at least enough to be capitalists.

Dogs, like many other animals, also have the capacity to express a willingness to trust and will engage in reprisals against other dogs that break trust. I often refer to this as “dog park justice” when talking with other folks who are dog people.

When dogs get together in a dog park (or other setting) they will typically want to play with each other. Being social animals, dogs have various ways of signaling intent. In the case of play, they typically engage in “bows” (slapping their front paws on the ground and lowering their front while making distinctive sounds). Since dogs cannot talk, they have to “negotiate” in this manner, but the result seems similar to how humans make agreements to interact peacefully.

Interestingly, when a dog violates the rules of play (by engaging in actual violence against a playing dog) other dogs recognize this violation of trust—just as humans recognize someone who violates trust. Dogs will typically recognize a “bad dog” when it returns to the park and will avoid it, although dogs seem to be willing to forgive after a period of good behavior. An understanding of agreements and reprisals for violating them seems to show that dogs have at least a basic system of morality.

As a final point, dogs also engage in altruistic behavior—helping out other dogs, humans and even other animals. Stories of dogs risking their lives to save others from danger are common in the media and this suggests that dogs can make decisions that put themselves at risk for the well-being of others. This clearly suggests a basic canine morality, one that makes such dogs better than ethical egoists. This is why, when I am asked whether I would choose to save my dog or a stranger, I would choose my dog: I know my dog is good, but statistically speaking a random stranger has probably done some bad things. Fortunately, my dog would save the stranger.


Four kinds of philosophical people

We’ll begin this post where I ended the last. The ideal philosopher lives up to her name by striving for wisdom. In practice, the pursuit of wisdom involves developing a sense of good judgment when tackling very hard questions. I think there are four skills involved in the achievement of good judgment: self-insight, humility, rigor, and cooperativeness.

Even so, it isn’t obvious how the philosophical ideal is supposed to model actual philosophers. Even as I was writing the last post, I had the nagging feeling that I was playing the role of publicist for philosophy. A critic might say that I set out to talk about how philosophers were people, but only ended up stating some immodest proposals about the Platonic ideal of the philosopher. The critic might ask: Why should we think that it has any pull on real philosophers? Do the best professional philosophers really conceive of themselves in this way? If I have no serious answer to these questions, then I have done nothing more than indulged in a bit of cheerleading on behalf of my beloved discipline. So I want to start to address that accusation by looking at the reputations of real philosophers.

Each individual philosopher will have their own ideas about which virtues are worth investing in and which are worth disregarding. Even the best working philosophers end up neglecting some of the virtues in favor of others: e.g., some philosophers might find it relatively less important to write in order to achieve consensus among their peers, and instead put the accent on virtues like self-insight, humility, and rigour. Hence, we should expect philosophical genius to be correlated with predictable quirks of character which can be described using the ‘four virtues’ model. And if that is true, then we should be able to see how major figures in the history of philosophy measure up to the philosophical ideal. If the greatest philosophers can be described in light of the ideal, we should be able to say we’ve learned something about the philosophers as people.

And then I shall sing to the Austrian mountains in my best Julie Andrews vibrato: “public relations, this is not“.

—-

In my experience, many skilled philosophers who work in the Anglo-American tradition will tend to have a feverish streak. They will tend to find a research program which conforms with their intuitions (some of which may be treated as “foundational” or givens), and then hold onto that program for dear life. This kind of philosopher will change her mind only on rare occasions, and even then only on minor quibbles that do not threaten her central programme. We might call this kind of philosopher a “programmist” or “anti-skeptic,” since the programmist downplays the importance of humility, and is more interested in characterizing herself in terms of the other virtues, like philosophical rigour.

You could name a great many philosophers who seem to fit this character. Patricia and Paul Churchland come to mind: both have long held the view that the progress of neuroscience will require the radical reformation of our folk psychological vocabulary. However, when I try to think of a modern exemplar of this tradition, I tend to think of W.V.O. Quine, who held fast to most of his doctrinal commitments throughout his lifetime: his epistemological naturalism and holism, to take two examples. This is just to say that Quine thought that the interesting metaphysical questions were answerable by science. Refutation of the deeper forms of skepticism was not very high on Quine’s agenda; if there is a Cartesian demon, he waits in vain for the naturalist’s attention. The most attractive spin on the programmist’s way of doing things is to say that they have raised philosophy to the level of a craft, if not a science.

—-

Programmists are common among philosophers today. But if I were to take you into a time machine and introduce you to the elder philosophers, it would be easy to lose all sense of how the moderns compare with their predecessors. The first philosophers lived in a world where science was young, if not absent altogether; there was no end of mystery to how the universe got on. For many of them, there was no denying that skepticism deserved a place at the table. From what they left behind, it seems that many ancient philosophers (save Aristotle and Pythagoras) did not possess the quality that we now think of as analytic rigour. The focus was, instead, on developing the right kind of life, and then — well, living it.

We might think of this as a wholly different approach to being a philosopher than that of our modern friend the programmist. These philosophers were self-confident and autonomous, yet had plenty to say to the skeptic. For lack of a better term, we might call this sort of philosopher a “guru” or “informalist”. The informalist trudges forward, not necessarily by the light of reason and explicit argument, but by insight and association, often expressed in aphorisms. To modern professional philosophers and academic puzzle-solvers, the guru may seem like a specialist in woo and mysticism, a peddler of non-sequiturs. Many an undergraduate in philosophy will aspire to be a guru, and endure the scorn of their peers (often rightly administered).

Be that as it may, some gurus end up having a vital place in the history of modern philosophy. Whenever I think of the ‘guru’ type of philosopher, I tend to think of Friedrich Nietzsche — and I feel justified in saying that in part because I guess that he would have accepted the title. For Nietzsche, insight was the single most important feature of the philosopher, and the single trait which he felt was altogether lacking in his peers.

Nietzsche was a man of passion, which is the reason why he is so easily misunderstood. Also, for a variety of reasons, Nietzsche was a man who suffered from intense loneliness. (In all likelihood, the fact that he was a rampant misogynist didn’t help in that department.) But he was also a preacher’s son, his rhetoric electric, his sermons brimming with insight and even weird lapses into latent self-deprecation. Moreover, he was a man who wrote in order to be read, and who was excited by the promise of new philosophers coming out to replace old canons. In the long run, he got what he wanted; as Walter Kaufmann wrote, “Nietzsche is one of the few philosophers since Plato whom large numbers of intelligent people read for pleasure”.

—-

“He has the pride of Lucifer.” — Russell on Wittgenstein

Some philosophers prefer to strike out on their own, paving an intellectual path by way of sheer stamina and force of will. We might call them the “lone wolves”. The lone wolf will often appear as a kind of contrarian with a distinctive personality. However, the lone wolf is set apart from a mere devil’s advocate by the fact that she needs to tap unusually deep wellsprings of creativity and cleverness in her craft. Because she needs to strike off alone, the wolf has to be prepared to chew bullets for breakfast: there is no controversial position she is incapable of endorsing, so long as it qualifies as a valid move in the game of giving and taking of reasons. She is out for adventure, to prove herself capable of working on her own. More than anything else, the lone wolf despises philosophical yes-men and yes-women. She has no time for the people who are satisfied by conventional wisdom — people who revere the ongoing dialectic as a sacred activity, a Great Conversation between the ages. The lone wolf says: the hell with this! These are problems, and problems are meant to be solved.

Ludwig Wittgenstein was a lone wolf, in the sense that nobody could quite refute Wittgenstein except for Wittgenstein. The philosophical monograph which made him famous, the Tractatus, began with an admission of idiosyncrasy: “Perhaps this book will be understood only by someone who has himself already had the thoughts that are expressed in it—or at least similar thoughts.—So it is not a textbook.—Its purpose would be achieved if it gave pleasure to one person who read and understood it.” He was a private man, who published very little while alive, and whose positions were sometimes unclear even to his students. He was an intense man, reputed to have wielded a hot poker at one of his contemporaries. And he had an oracular style of writing — the Tractatus resembles an overlong PowerPoint presentation, while the Investigations was a free-wheeling screed. These qualities conspired to give the man himself an almost mythical quality. As Ernest Nagel wrote in 1936 (quoting a Viennese friend): “in certain circles the existence of Wittgenstein is debated with as much ingenuity as the historicity of Christ has been disputed in others”.

Wittgenstein’s work has lasting significance. His anti-private language argument is a genuine philosophical innovation, and widely celebrated as such. As such, he is the kind of philosopher that everybody has to know at least something about. But none of this came about by the power of idiosyncrasy alone. Wittgenstein achieved notoriety by demonstrating that he had a penetrating ability to go about the whole game of giving and taking reasons.

—-

“Synthesizers are necessarily dedicated to a vision of an overarching truth, and display a generosity of spirit towards at least wide swaths of the intellectual community. Each contributes partial views of reality, Aristotle emphasizes; so does Plotinus, and Proclus even more widely…” Randall Collins, The Sociology of Philosophies

Some philosophers are skilled at combining the positions and ideas that are alive in the ongoing conversation and weaving them into an overall picture. This is a kind of philosopher that we might call the “syncretist”. Much like the lone wolf, the syncretist despises unchallenged dogmatism; but unlike the lone wolf, this is not because she enjoys the prospect of throwing down the gauntlet. Rather, the syncretist enjoys the murmur of people getting along, engaged in a productive conversation. Hence, the syncretist is driven to reconcile opposing doctrines, so long as those doctrines are plausible. When she is at her best, the syncretist is able to generate a powerful synthesis out of many different puzzle pieces, allowing the conversation to become more abstract without becoming unintelligible. They do not just say, “Let a thousand flowers bloom” — instead, they demonstrate how the blooming of one flower happens only in the company of others.

The only philosopher that I have met who absolutely exemplifies the spirit of the syncretist, and persuasively presents the syncretist as a virtuous standpoint in philosophy, is the Stanford philosopher Helen Longino. In my view, her book The Fate of Knowledge is a revelation.

A more infamous example of the syncretist, however, is Jurgen Habermas. Habermas is an under-appreciated philosopher, a figure who is widely neglected in Anglo-American philosophy departments and (for a time) was widely scorned in certain parts of Europe. True, Habermas is a difficult philosopher to read. And, in fairness, one sometimes gets the sense that his stuff is a bit too ecumenical to be motivated on its own terms. But part of what makes Habermas close to an ideal philosopher is that he is an intellectual who has read just about everything — he has partaken in wider conversations, attempting to reconcile the analytic tradition with themes that stretch far beyond its remit. Habermas also has a prodigious output: he has written on countless subjects, including speech act theory, the ethics of assertion, political legitimation, Kohlberg’s stages of moral development, collective action, critical theory and the theory of ideology, social identity, normativity, truth, justification, civilization, argumentation theory, and doubtless many other things. If a dozen people carved up his bibliography and each staked a claim to part of it, you’d end up with a dozen successful academic careers.

For some intellectuals, syncretism is hard to digest. Just as both mothers in the court of King Solomon might have felt equally betrayed, the unwilling subjects of the syncretist’s analysis may respond with ill tempers. In particular, the syncretist grates on the nerves of those who aspire to achieve the status of lone wolf intellectuals. Take two examples, mentioned by Dr. Finlayson (Sussex). On the one hand, Marxist intellectuals will sometimes like to accuse Habermas of “selling out” — for instance, because Habermas has abandoned the usual rhythms of dialectical philosophy by trying his hand at analytic philosophy. On the other hand, those in analytic philosophy are not always very happy to recognize Habermas as a precursor to the shape of analytic philosophy today. John Searle explains in an uncompromising review: “Habermas has no theory of social ontology. He has something he calls the theory of communicative action. He says that the “purpose” of language is communicative action. This is wrong. The purpose of language is to perform speech acts. His concept of communicative action is to reach agreement by rational discussion. It has a certain irony, because Habermas grew up in the Third Reich, in which there was another theory: the “leadership principle”.” I suspect that Searle got Habermas wrong, but nobody said life as a philosopher was easy.

—-

Everything I’ve said above is a cartoon sketch of some philosophical archetypes. It is worth noting, of course, that none of the philosophers I have mentioned will fit into the neat little boxes I have made for them. The vagaries of the human personality resist being reduced to archetypes. Even in the above, I cheated a little: Nietzsche is arguably as much a lone wolf as he is a guru. I also don’t mean to suggest that all professional philosophers will fit into anything quite like these categories. Some are by reputation much too close to the philosophical ideal to fit into an archetype. (Hilary Putnam comes to mind.) And other professional philosophers are nowhere close to the ideal — there is no shortage of philosophers behaving badly. I mean only to say something about how you can use the ‘four virtues’ model of wisdom to say something interesting about philosophers themselves.

(BLS Nelson is the author of this article.)

Seeing philosophers as people

“It is the profession of philosophers,” David K. Lewis writes, “to question platitudes that others accept without thinking twice.” He adds that this is a dangerous profession, since “philosophers are more easily discredited than platitudes.” As it turns out, in addition to being a brilliant philosopher, Lewis was a master of understatement.

For some unwary souls, conversation with the philosopher can feel like an attack or assault. The philosopher’s favorite hobby is critical discussion, and this is almost guaranteed to be — shall we say — annoying. (Indeed, I am tempted to say that if it weren’t annoying, it would be a sign that something has gone wrong — that the conversation is becoming stale and irrelevant.) Ordinary folk, on the other hand, generally try to do what it takes to get along with others, which means being polite and trying to smooth over conflict, and it may seem as though the philosopher has terrible manners for asking too many uncomfortable questions. And the ordinary folk are sometimes quite right. Indeed, sometimes what passes for philosophy really is just a trivial bloodsport, a pointless game of denigration and insult with no productive bottom line that is disguised as disinterested inquiry (as illustrated by this hilarious spoof article).

The estrangement between philosophers and non-philosophers might owe to the fact that there is no strong consensus about what it means to be a philosopher. For one thing, philosophers are under external pressure to tell the world just who the hell they think they are. As funding is increasingly diverted away from the humanities, the self-identity of the philosopher has come under increased scrutiny. For another thing, the discipline is suffering from some internal strain. Analytic philosophy once had a strong mission statement: to clear up conceptual confusions by revealing how people were being fooled by grammar into committing to absurd theses. Unfortunately, over the past few decades the analytic philosopher’s confidence in their ability to do conceptual analysis has suffered. The tried and true philosophical reliance upon aprioristic reasoning has fallen increasingly out of favor, as greater awareness of insights from psychology and the social sciences has begun to undermine the credibility of distinctively philosophical inquiry. The harder the social sciences encroach upon aprioristic terrain, the harder rear-guard philosophers push back, and it is not at all obvious that they are winning the fight. It is against this background that Livengood et al. confess: “Many signs point to an identity crisis in contemporary philosophy. As a group, we philosophers are puzzled and conflicted about what exactly philosophy is.”

I don’t really think that philosophers should worry very much about their sense of identity, because there is a pretty straightforward way of characterizing the ideal philosopher. But in order to see why, it’s worth taking the time to think about what it means to be a philosopher: why it’s worth it, how non-philosophers can benefit from whatever the philosopher is up to, and how philosophers can figure out how to do their business better. We should start thinking more often about what the philosophical personality looks like, so that everyone can relate to philosophers as people.

A not-awful definition of philosophy could begin thus: “All philosophers are lovers — they are lovers of wisdom”. This gives due credit to the etymology of philosophy (which, of course, is commonly translated as ‘love of wisdom’). But it also sounds a bit perverse. Indeed, when little Johnny comes back from Oxford after a year of studying philosophy, and tells Mom that he has fallen in love with an abstract noun, one ought not be surprised if Mom frets for Johnny. So what I mean needs to be unpacked a little.

In the abstract, I would argue that wisdom involves at least four virtues: insight, prudence, reason, and fair-mindedness. In practice, I think, wisdom involves a degree of self-insight (the ability to articulate and weigh one’s intuitions), intellectual humility (the ability to actively poke at and potentially abandon those intuitions), intellectual rigor (the ability to reason through the implications of what one thinks), and cooperative engagement (the ability to communicate one’s own convictions in a cooperative and illuminating way). That is the sort of person that the philosopher ought to be.

This is not to suggest that this ideal of the philosopher is one that every philosopher in every time in history would endorse. To choose a recent example, one prominent philosopher argued (tongue-in-cheek, I think) that contemporary philosophers just aren’t like that. He argues: “What is literally true is that we philosophers value knowledge, like our colleagues in other departments. Do we love knowledge? One might reasonably demur from such an emotive description.” Evidently, the working assumption is that the reader learning this information is better served if they lower their expectations of philosophy, instead of lowering their expectations of the people working in philosophy departments. I cannot think of any way to reasonably motivate this assumption.

But even if we thought that somehow the quoted author had it right, the history of the future would show him wrong. The greatest luminaries in philosophy, the great wise and dead, have a tendency to crowd out the loud and supercilious living. Their ability to command our attention owes to the fact that philosophical luminaries have always filled an essential cultural need: namely, they have helped to reinvent the idea of what it means to come to maturity, by striving to be insightful, humble, rigorous, and engaged. ‘The love of wisdom’ is not just a roundabout way of speaking about valuing knowledge — it is a way of talking about trying to be better as people. Philosophers ask us to be at our best when they ask us to study wisdom for its own sake, because philosophy is as essential to adulthood as preschool is to the young.

This, I think, is a not-totally-unsatisfying way of looking at the ideal philosopher. But there is a lot missing. It doesn’t really capture the kind of energy that goes into doing philosophy, the nerdy thrill that goes into tackling the biggest questions you can think of. I have not given you any reason to think that the ideal of wisdom tells us anything about what real philosophers are like. I’m saving that for the next post.

[Substantial edit for clarity on Aug. 21]

(BLS Nelson is the author of this article.)

Pro-Life, Pro-Environment

Human fetus, age unknown (Image via Wikipedia)

Here in the States we are going through the seemingly endless warm-up for our 2012 presidential election. President Obama is the candidate of the Democrats and the Republicans are trying to sort out who will be their person. The Republican contenders for the nomination are doing their best to win the hearts and minds of the folks who will anoint one of them.

In order to do this, a candidate must win over the folks who are focused on economic matters (mainly pushing for low taxes and less regulation) and those who are focused on what they regard as moral issues (pushing against abortion, same-sex marriage and so on). The need to appeal to these views has caused most of the candidates to adopt the pro-life (anti-abortion) stance as well as to express a commitment to eliminating regulation. Some of the candidates have gone so far as to claim they will eliminate the EPA (Environmental Protection Agency) on the grounds that regulations hurt the job creators.

On the face of it, there seems to be no tension between being pro-life and against government regulation of the sort imposed via the EPA. A person could argue that since abortion is wrong, it is acceptable for the government to deny women the freedom to have abortions. The same person could, quite consistently it seems, then argue that the state should take a pro-choice stance towards business in terms of regulation, especially environmental regulation. However, if one digs a bit deeper, it would seem that there is a potential tension here.

In the States, the stock pro-life argument is that the act of abortion is an act of murder: innocent people are being killed. There are, of course, variations on this line of reasoning. However, the usual moral arguments are based on the notion that harm is being done to an innocent being. When people counter with an appeal to the rights or needs of the mother, the stock reply is that these are overridden in this situation. That is, avoiding harm to the fetus (or pre-fetus) is generally more important than avoiding harm to the mother. In some cases people take this to be absolute, in that they regard abortion as never allowable. Some do allow exceptions in the case of medical necessity, rape or incest. There are, of course, also religious arguments, but those are best discussed in another context.

If this line of reasoning is taken seriously, and I think that it should be, then a person who is pro-life on these grounds would seem to be committed to extending this moral concern for life beyond the womb. Unless, of course, there is a moral change that occurs after birth that creates a relevant difference which removes the need for moral concern. This, however, would seem unlikely (at least in this direction, namely from being an entity worthy of moral concern to being an entity who does not matter).

It is at this point that the matter of environmental concerns can be brought into play. Shortly before writing this I was reading an article about the environmental dangers children are exposed to, primarily in schools. These hazards include the usual suspects: lead, mercury, pesticides, arsenic, air pollution, mold, asbestos, radon, BPA, polychlorinated biphenyls, and other such things.

Currently, children are regularly exposed to a witches’ brew of human-made chemicals and substances that have been well established as being harmful to human beings and especially harmful to children. They are also exposed to naturally occurring substances through the actions of human beings. For example, burning coal and oil releases naturally occurring mercury into the air. As another example, people use naturally occurring lead and asbestos in construction. As noted above, it is well established that these substances are harmful to humans and especially harmful to children.

If someone holds the pro-life position and believes that abortion should be regulated by the state because of the harm being done, then it would seem to follow that they would also need to be committed to the regulation of harmful chemicals and substances, even those produced and created by businesses. After all, if the principle that warrants regulating abortion is based on the harm being done to the fetus/pre-fetus, then the same line of reasoning would also extend to the harm being done to children and adults.

If someone were to counter by saying that they are only morally concerned with the fetus/pre-fetus, then the obvious reply is that these entities are even more impacted by exposure to such chemicals and substances. As such, they would also seem to be committed to accepting regulation of the environment on the same grounds that they argue for regulation of the womb.

It might be countered that these substances generally do not kill the fetus/pre-fetus or children  but rather cause defects. As such, a person could be against killing (and hence anti-abortion) but also be against regulation on the grounds that they find birth defects, retarded development and so on to be acceptable. That is, killing is not acceptable but maiming and crippling are tolerable.

This would, interestingly enough, be a potentially viable position. However, it does seem somewhat problematic for a person to be morally outraged at abortion while being willing to tolerate maiming and crippling.

It might also be argued that businesses should be freed from regulation on the utilitarian grounds that the jobs and profits created will outweigh the environmental harms being done. That is, in return for X jobs and Y profits, we can morally tolerate Z levels of contamination, pollution, birth defects, illness and so on. This is, of course, a viable option.

However, if this approach is acceptable for regulating the environment, then it would seem to also be acceptable for regulating the womb. That is, if a utilitarian approach is taken to the environment, then the same would seem to also be suitable for abortion. It would seem that if we can morally tolerate the harms resulting from a lack of regulation of the environment, then we could also tolerate the harms resulting from abortion.

Thus it would seem that a person who is pro-life and favors regulating the womb on the grounds that abortion harms the innocent should also favor regulating the environment on the grounds that pollution and contamination also harm the innocent.


A World Less Violent?

Violence! (Image by Rickydavid via Flickr)

Although the Libyan and Iraq wars recently ended, the world still seems like a violent place. After all, the twenty-four-hour news cycle is awash with stories of crime, war, riots and other violent activities. However, Steven Pinker contends in his The Better Angels of Our Nature: Why Violence Has Declined that we are living in a time in which violence is at an all-time low.

Pinker bases his claim on statistical data. For example, the records of 14th-century Oxford reveal 110 homicides per 100,000 people, while the middle of the 20th century saw London with a murder rate of less than 1 per 100,000. As another example, even the 20th century (which saw two world wars and a multitude of lesser wars) saw war kill only 0.7% of the population (3% if all war-connected deaths are counted).

Not surprisingly, people have pointed to the fact that modern wars have killed millions of people and that the number of people who die violently is fairly large. Pinker makes the obvious reply: the number of violent deaths is higher, but the percentage is far lower, mainly because there are so many more people today relative to the past.
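To make the rate-versus-count point concrete, here is a toy calculation. The population figures (roughly 5,000 for medieval Oxford and 8 million for mid-century London) are illustrative assumptions of mine, not Pinker's numbers:

\[
\frac{110}{100{,}000} \times 5{,}000 \approx 6 \text{ homicides per year (Oxford)}, \qquad \frac{1}{100{,}000} \times 8{,}000{,}000 = 80 \text{ homicides per year (London)}
\]

On these assumptions, modern London would suffer more than ten times as many homicides as medieval Oxford even though its homicide rate is over a hundred times lower, which is exactly the count-versus-percentage distinction Pinker is drawing.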

As the title suggests, Pinker attributes the change, in part, to people being better at impulse control, considering consequences, and considering others. This view runs contrary to the idea that people today are not very good at such things, but perhaps people are generally better at them than people in the past were. Pinker does also acknowledge that states have far more control now than in the past, which tends to reduce crime.

While Pinker makes a good case, it is also reasonable to consider other explanations that can be added to the mix.

In the case of war, improved medicine and improved weapons have reduced the number of deaths. Wounds that would have been fatal in the past can often be handled by battlefield medicine, thus lowering the percentage of soldiers who die as the result of combat. Weapon technology also has a significant impact. Improvements in defensive technology mean that a lower percentage of combatants are killed, and improvements in weapon accuracy mean that fewer non-combatants are killed. The newer technology has also changed the nature of warfare in terms of civilian involvement. With some notable exceptions, siege warfare is largely a thing of the past because of the changes in technology. So, instead of starving a city into surrendering, soldiers now just take the city using combined arms.

The improved technology also means that modern soldiers are far more effective than soldiers in the past, which reduces the percentage of the population that needs to be involved in combat, thus lowering the percentage of people killed.

There is also the fact that the nature of competition between human groups has changed. At one time the conflict was directly over land and resources and these conflicts were settled with violence. While this still occurs, we now have far broader avenues of competition, such as economics, sports, and so on. As such, people might be just as violently inclined as ever, only now we have far more avenues into which to channel that violence. So, for example, back in the day an ambitious man might have as his main option being a noble and achieving his ends by violence. Today a person with ambitions of conquest might start a business or waste away his life in computer games.

In the case of violent crime, people are more distracted, more medicated, and more separated than in the past. This would tend to reduce violent crimes, at least in terms of the percentages.

A rather interesting factor to consider is natural selection. Societies tend to respond to violent crimes with violence, often killing such criminals. Wars also tend to kill the violent. As such, centuries of war and violent crime might be performing natural selection on the human species: the more violent humans would tend to be killed, thus leaving those less prone to crime and violence to reproduce more. Crudely put, perhaps we are killing our way towards peace.


Should Zygotes be Considered People?

Oocyte viewed with HMC (Image via Wikipedia)

In the United States certain Republicans have been proposing legislation that would define a zygote as a legal person. The most recent instance occurred in Mississippi, when voters were given the chance to approve or reject the following: “the term ‘person’ or ‘persons’ shall include every human being from the moment of fertilization, cloning, or the functional equivalent thereof.” The voters rejected this, but other similar attempts are planned or already in the works. There are, as far as I know, no serious attempts to push personhood back before fertilization (that is, to establish eggs and sperm as being persons).

Since this is a matter of law, whether or not a zygote is a legal person depends on whether such a law is passed and then passes legal muster. Given that corporations are legally persons, it does not seem all that odd to have zygotes as legal people. Or whales. Or forests. There is, after all, no requirement that legal personhood be established by considered philosophical argumentation.

From a philosophical perspective, I would be inclined to stick with what seems to be the general view: zygotes are not persons. I do accept the obvious: a zygote is alive (as is an amoeba or any cell in my body), a zygote has full human DNA (as does almost any cell in my body), and a zygote has the potential to be an important part of a causal chain that leads to a human being (as does any cell in my body that could be used in cloning). However, these qualities of a zygote do not seem to be sufficient to establish it as a person. After all, the relevant  qualities of the zygote seem to be duplicated by some of the cells in our bodies and it would be absurd to regard each of us as a collective of persons.

But, as I noted, the legal matter is quite distinct from the philosophical; after all, zygotes (or anything) could become legal persons with the appropriate legislation. This leads to a point well worth considering, namely the consequences of such a law.

The most obvious would be that abortion and certain forms of birth control (such as IUDs and the “morning after” pill) would certainly seem to legally count as murder. After all, they would involve the intentional (and possibly premeditated) murder of a legal person. This is, of course, one of the main intended consequences of such attempts. However, there would seem to be other consequences as well.

One rather odd consequence would be in regards to occupancy laws and regulations. These tend to be set by the number of persons present and unless laws are written to allow exemptions for zygotes, etc. then this would be a point of legal concern. This seems absurd, which is, of course, the point.

Another potential consequence is the matter of deductions for dependents. If a zygote is a person, then a frozen zygote is still a person and presumably the child of the parent(s). Unless specific laws are written to prevent it, this would seem to allow people to claim frozen zygotes as dependent children and thus take a tax deduction for each one. While the cost of creating and freezing zygotes would be a factor, the tax deductions would seem to be well worth it. Perhaps this is the secret agenda behind such legislation: people could avoid taxes by having enough zygotes in the freezer.

Of course, a “zygotes are people” law might also entail that it would be illegal to freeze zygotes, on the grounds that they would be confined or imprisoned without consent or due process. Naturally, laws would need to be written for this, and they would also need to be worded so as to avoid making “imprisoning” a zygote in the womb a crime. There is also the matter of in vitro fertilization and whether certain processes would be outlawed by the “zygotes are people” law. After all, some of the zygotes created do not survive. If these zygotes are people, IVF could be regarded as involving, if not murder, at least some sort of homicide or zygoteslaughter. Of course, outlawing such practices seems to be one of the intended consequences of these proposed laws.

Another point of concern is the matter of death certificates. After all, the death of a person requires a certificate and the usual legal proceedings. If a zygote were a legal person, then it would seem to follow that if a zygote died, the death would need to be properly recorded and perhaps investigated to determine whether a crime had been committed. Naturally, specific laws could be written regarding various circumstances (for example, should women have to report every zygote that fails to implant, thus resulting in the death of a person?). Perhaps the state would need to set up womb cameras or some other detector to monitor the creation of these new people so as to ensure that no death of a person goes unreported.

One rather interesting consequence is that such a law might set the precedent that any cell that could be cloned would count as a person (after all, as argued above, it would seem to share the relevant qualities of a zygote and the law in question mentioned cloning or any functional equivalent). This would have some rather bizarre consequences.


Argubot

Robot Monster... now it turns out that the robots ...
Image by Javier Piragauta via Flickr

One interesting phenomenon is that groups often adopt a set of stock views and arguments that are almost mechanically deployed to defend those views. In many cases the pattern of responses seems almost robotic: in many “discussions” I can predict what stock arguments will be deployed next.

I have even found that if I can lure someone off their pre-established talking points, they are often quite at a loss as to what to say next. This, I suspect, is a sign that a person does not really have his/her own arguments but is merely putting forth established dogmarguments (dogmatic arguments).

Apparently someone else noticed this phenomenon, specifically in the context of global warming arguments, and decided to create his own argubot. Nigel Leck created a script that searches Twitter for key phrases associated with stock arguments against the view that humans have caused global warming. When the argubot finds a foe, it engages by sending a response tweet containing a counter to the argument (and relevant links).
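I have not seen Leck’s actual code, so the following Python sketch only illustrates the general pattern just described: match stock phrases, reply with a canned rebuttal and a link. The two Twitter functions are hypothetical stubs standing in for whatever client library the real script used, and the phrases, rebuttals, and URLs are invented placeholders.

```python
# Stock denial phrases mapped to canned rebuttals (placeholder content).
REBUTTALS = {
    "climate has always changed": (
        "Natural variation doesn't account for the current warming trend: "
        "http://example.org/forcings"),
    "no warming since 1998": (
        "1998 was an outlier El Nino year; the long-term trend is up: "
        "http://example.org/trend"),
}

def search_tweets(phrase):
    """Hypothetical stub: return (user, tweet_id) pairs matching phrase."""
    return [("@example_user", 12345)]

def send_reply(user, tweet_id, text):
    """Hypothetical stub: post a reply tweet (here, just print it)."""
    print(f"reply to {user} (tweet {tweet_id}): {text}")

def run_argubot():
    # For each stock argument, find tweets repeating it and fire back
    # the matching prefabricated counter-argument.
    for phrase, rebuttal in REBUTTALS.items():
        for user, tweet_id in search_tweets(phrase):
            send_reply(user, tweet_id, f"{user} {rebuttal}")

run_argubot()
```

The striking thing is how little machinery this takes: a lookup table and a search loop are enough to sustain what targets experience as a debate.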

In some cases the target of the argubot does not realize that s/he is arguing with a script rather than a person. The argubot is set up to respond with a variety of “prefabricated” arguments when the target repeats an argument, thus helping to create that impression. The argubot also has a repertoire that goes beyond global warming. For example, it is stocked with arguments about religion. This also allows it to maintain the impression that it is a person.

While the argubot is reasonably sophisticated, it is not quite up to the Turing test. For example, it cannot discern when people are joking. While it can fool people into thinking they are arguing with a person, it is important to note that the debate takes place in the context of Twitter. As such, each tweet is limited to 140 characters. This makes it much easier for an argubot to pass itself off as a person. Also worth considering is the fact that people tend to have rather low expectations for the contents of tweets, which makes it much easier for an argubot to masquerade as a person. However, it is probably just a matter of time before a bot passes the Tweeter Test (being able to properly pass itself off as a person in the context of Twitter).

What I find most interesting about the argubot is not that it can often pass as a human tweeter, but that the argumentative process with its targets can be automated in this manner. This inclines me to think that the people with whom the argubot argues are also, in effect, argubots. That is, they are also “running scripts” and presenting prefabricated arguments they have acquired from others. As such, it could be seen as a case of a computer-based argubot arguing against biological argubots, with both sides relying on scripts and data provided by others.

It would be interesting to see the results if someone wrote another argubot to engage the current argubot in debate. Perhaps in the future argumentation will be left to the argubots and the silicon tower will replace the ivory tower. Then again, this would probably put me out of work.

One final point worth considering is the ethics of the argubot at hand.

One concern is that it seems deceptive: it creates the impression that the target is engaged in a conversation with a person when s/he is actually just engaged with a script. Of course, the argubot does not claim to be a person, nor does it use deception to harm the target. Given its purpose, arguing about global warming, it seems irrelevant whether the arguing is done by a person or a script. This contrasts with cases in which it does matter, such as a chatbot designed to trick someone into thinking that another person is romantically interested in them, or one that otherwise engages with the intent to deceive. As such, the argubot does not seem to be unethical in regard to the fact that people might think it is a person.

Another concern is that the argubot seeks out targets and engages them (an argumentative Terminator or Berserker). This, some might claim, could be seen as a form of spamming or harassment.

As far as the spamming goes, the argubot does not deploy what would intuitively be considered spam in terms of its content. After all, it is not trying to sell a product. However, it might be argued that it is sending out unsolicited bulk tweets, which might thus be regarded as spam. Spamming is rather well established as immoral (if an argument is wanted, read “Evil Spam” in my book What Don’t You Know?), and if the argubot is spamming, then it is acting unethically.

While the argubot might seem like a spambot, one way to defend it against this charge is to note that it provides mostly relevant responses, comparable to what a human would legitimately send in reply to a tweet. Thus, while it is automated, it is arguing rather than spamming. This seems to be an important distinction. After all, the argubot does not try to sell male enhancement, scam people, or get people to download a virus. Rather, it responds to arguments that can be seen as inviting a response, be it from a person or a script.

In regards to the harassment charge, the argubot does not seem to be engaged in what could legitimately be considered harassment. First, the content does not seem to constitute harassment. Second, the context of the “debate” is a public forum (Twitter) that explicitly allows such interactions to take place, whether they involve just humans or humans and bots.

Obviously, an argubot could be written that would actually be spamming or engaged in harassment. However, this argubot does not seem to cross the ethical line in regards to this behavior.

I suspect that we will see more argubots soon.


Human, Really?

Kuhn used the duck-rabbit optical illusion to ...
Image via Wikipedia

Sharon Begley recently wrote an interesting article, “What’s Really Human?” In this piece she presents her concern that American psychologists have been making hasty generalizations over the years. To be more specific, she is concerned that such researchers have been extending results gleaned from studies of undergraduates at American universities to the entire human race. For example, findings about what college students think about self-image are extended to all of humanity.

She notes that some researchers have begun to question this approach and have contended that American undergraduates are not adequate representatives of the entire human race in terms of psychology.

In one example, she considers the optical illusion involving two line segments (the Müller-Lyer illusion). Although the segments have the same length, one has arrows on the ends pointing outward and the other has the arrows pointing inward. To most American undergraduates, the one with the inward-pointing arrows looks longer. But when the San of the Kalahari, African hunter-gatherers, look at the lines, they judge them to be the same length. This difference is taken to reflect the differing conditions under which the two groups live.

This result is, of course, hardly surprising. After all, people who live in different conditions will tend to have different perceptual skill sets.

Begley’s second example involves the “ultimatum game,” typical of the tests intended to reveal truths about human nature via games played with money. The gist of the game is that there are two players, A and B. The experimenter gives player A $10, and A must then decide how much to offer B. If B accepts the offer, the money is divided as proposed; if B rejects it, both leave empty-handed.

When undergraduates in the States play, player A will typically offer $4-5, while those playing B will most often refuse anything below $3. This is taken as evidence that humans have evolved a sense of justice that leads us to make fair offers and also to punish unfair ones, even when doing so means a loss. According to the theorists, humans do this because we evolved in small tribal societies in which social cohesion mattered and freeloaders (or free riders, as they are sometimes called) had to be kept from getting away with their freeloading.

As Begley points out, “people from small, nonindustrial societies, such as the Hadza foragers of Tanzania, offer about $2.50 to the other player—who accepts it. A ‘universal’ sense of fairness and willingness to punish injustice may instead be a vestige of living in WEIRD, market economies.” (“WEIRD” stands for Western, educated, industrialized, rich, and democratic.)

While this does provide some evidence for Begley’s view, it seems rather weak. The difference between the Americans and the Hadza does not seem to be one of kind (that is, that Americans are motivated by fairness and the Hadza are not). Rather, it seems plausible to see it in terms of quantity. After all, the Americans refuse anything below $3 while the Hadza’s refusal level seems to be only 50 cents lower. This difference could be explained not in terms of culture but of relative affluence. After all, to a typical American undergrad it is no big deal to forgo $3. However, someone who has far less (as is probably the case with the Hadza) would probably be willing to settle for less.
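To make this threshold comparison concrete, here is a minimal Python sketch of the game. The $10 pot and the offer and refusal figures come from the discussion above; treating player B as having a fixed acceptance threshold is my own simplification for illustration.

```python
# Ultimatum game as described above: A proposes a split of the pot,
# B either accepts (money divided as proposed) or rejects (both get nothing).
POT = 10.0

def play(offer, responder_threshold):
    """Return (A's payoff, B's payoff) for a single round."""
    if offer >= responder_threshold:
        return POT - offer, offer   # B accepts the proposed division
    return 0.0, 0.0                 # B rejects; both leave empty-handed

# Typical American undergraduate pairing: A offers $4, B refuses below $3.
print(play(offer=4.0, responder_threshold=3.0))    # (6.0, 4.0)

# The Hadza pattern reported by Begley: a $2.50 offer is accepted.
print(play(offer=2.50, responder_threshold=2.50))  # (7.5, 2.5)

# The same $2.50 offer would be refused by the American responder.
print(play(offer=2.50, responder_threshold=3.0))   # (0.0, 0.0)
```

Seen this way, the two populations run the same punish-unfairness rule and differ only in where the threshold sits, which is exactly what the affluence explanation predicts.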

To use an analogy, imagine playing a comparable game using food instead of money. If I had recently eaten and knew I had a meal waiting at home, I would be more inclined to punish a small offer than accept it. After all, I have nothing to lose by doing so and would gain the satisfaction of denying my “opponent” her prize. However, if we were both very hungry and I knew that my cupboards were bare, then I would be much more inclined to accept a smaller offer on the principle that some food is better than none.

Naturally, cultural factors could also play a role in determining what counts as fair. After all, if A is given the money, B might regard it as A’s property and regard A as being generous in sharing anything. This would show that culture is a factor, but that is hardly a shock. The idea of a universal human nature is quite consistent with its being modified by specific conditions. After all, individual behavior is modified by such conditions. To use an obvious example, my level of generosity depends on the specifics of the situation: who is involved, why, when, and so on.

There is also the broader question of whether such money games actually reveal truths about justice and fairness. This topic goes beyond the scope of this brief essay, however.

Begley finishes her article by noting that “the list of universals-that-aren’t kept growing.” That is, allegedly universal ways of thinking and behaving have been found to not be so universal after all.

This shows that contemporary psychology is discovering what Herodotus noted thousands of years ago, namely that “custom is king” and what the Sophists argued for, namely relativism. Later thinkers, such as Locke and other empiricists, were also critical of the idea of universal (specifically innate) ideas. In contrast, thinkers such as Descartes and Leibniz argued for the idea of universal (specifically innate) ideas.

I am not claiming that these thinkers are right (or wrong), but it is certainly interesting to see that these alleged “new discoveries” in psychology are actually very, very old news. What seems to be happening in this cutting-edge psychology is a return to the rationalist and empiricist battles over the innate content of the mind (or the lack thereof).
