Tag Archives: Social Sciences

Concentration of Wealth

WEALTH IN THE USA (Photo credit: er00mb0b)

In early 2014 Oxfam International released some interesting statistics regarding the distribution of the world’s wealth. Here are some of the highlights:

 

• The richest 1% of the population owns about 50% of the world's wealth.
• This 1% owns $110 trillion.
• $110 trillion is 65 times the wealth owned by the bottom (economically) 50% of people.
• The bottom 50% owns the same amount of wealth as the 85 wealthiest people.
• In the United States, the top 1% received 95% of the growth since 2009, while the bottom 90% lost wealth.
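Taken at face value, these figures are mutually consistent. A quick back-of-the-envelope check (a minimal Python sketch; the inputs are simply the numbers reported above, and the outputs are rounded) makes the implied magnitudes concrete:

```python
# Arithmetic check of the Oxfam figures quoted above.
# All wealth values are in trillions of US dollars.

top_1pct_wealth = 110.0        # reported wealth of the richest 1%
ratio_to_bottom_half = 65      # top 1% holds 65x the bottom 50%'s wealth

bottom_half_wealth = top_1pct_wealth / ratio_to_bottom_half
print(f"Bottom 50% of humanity: ~${bottom_half_wealth:.2f} trillion")
# -> ~$1.69 trillion

# The bottom 50% reportedly owns as much as the 85 wealthiest people:
per_person = bottom_half_wealth * 1e12 / 85
print(f"Average holding among the 85: ~${per_person / 1e9:.0f} billion")
# -> ~$20 billion each

# And if $110 trillion is roughly half of all wealth, total world wealth
# comes out to about $220 trillion.
```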

That there is an extremely unequal distribution of wealth is hardly surprising. In my very first political science class, I learned that every substantial human society has had a pyramid-shaped distribution of wealth. Inevitably, the small population at the top owns a disproportionately large amount of wealth while the large population at the bottom owns a disproportionately small amount. This pattern holds whether the society is a monarchy, dictatorship, communist state, or democracy.

From a moral standpoint, one important question is whether or not such a distribution is just. While some might be tempted to regard any disproportional distribution as unjust, this would be an error. After all, the justness of a distribution is not a simple matter of numbers. To use an easy example, consider the distribution of running trophies. Obviously enough, there is a very unequal distribution of such awards. First, almost all people who have them will be (or will have been) runners. As such, most people will not have even one trophy. Second, even among the population of runners there will be a disproportionate distribution: there will be a fairly small percentage of runners who have a large percentage of the trophies. As such, there is a concentration of running trophies. However, this is not unjust: the competition for such trophies is open, the competition is generally fair, and a trophy is generally earned by running well. Roughly put, the better runners will have the most trophies and they will be a small percentage of the runner population. Because of the nature of the competition, I have no issue with this. There is, of course, also the biasing factor that I have won a lot of trophies.

Those who defend the unequal distribution of wealth often endeavor to claim that the competition for wealth is analogous to the situation I presented for running trophies: the competition is open, the competition is fair, and the reward is justly earned by competing well. While this is a plausible approach to justifying the massive inequality, the obvious problem is that these claims are not true.

Those who start out in a wealthy family might not make their money by inheritance, but they enjoy a significant starting advantage over those born into less affluent families. While it is true that a few people rise from humble origins to great financial success, those stories are so impressive because of the difficulty of doing so and because of how few people achieve such success.

There is also the obvious fact that those who hold wealth use their influence to ensure that the political and social system favors the wealthy. While this might not be aimed at keeping people from becoming wealthy, the general impact is that existing wealth is favored and defended against attempts to “intrude” into the top of the pyramid. Naturally, people will point to those who succeeded fantastically despite this system. But, once again, these stories are so impressive because of the incredible challenges that had to be overcome and because such stories are incredibly rare.

There is also the obvious doubt about whether those who possess the greatest wealth earned the wealth in a way that justifies their incredible wealth. In the case of running, a person must earn her gold medal in the Olympic marathon by being the best runner. In such a case, there is little doubt that the achievement has been properly earned. However, the situation for great wealth is not as clear. Now, if a person arose from humble origins and by hard work, virtue, and talent managed to earn a fortune, then it seems fair to accept the justice of that wealth. However, if someone merely inherits a pile of cash or engages in misdeeds (like corruption or crime) to acquire the wealth, then it seems reasonable to regard that as unjust wealth.

As such, to the degree that the competition for wealth is open and fair, and to the degree that the earning of wealth is proportional to merit, the incredibly unbalanced distribution can be regarded as just. However, it seems evident that this is not the case. For example, a quick review of the laws and tax codes will show quite nicely how the system is designed to favor those who already hold wealth.

Suppose, for the sake of argument, that the distribution of wealth is actually warranted on grounds similar to the distribution of running trophies. That is, suppose that the competition is open and fair and the rewards are merit-based. This still provides grounds for criticism of the radical concentration of wealth.

One obvious point is that the distribution of running trophies has no real impact. After all, a person can live just fine without any such trophies. As such, letting them be divided up by competition is fine—even if most trophies go to a few people. However, wealth is another matter. At the basic level, a degree of wealth is a necessity for survival. That is, a person needs it (or, rather, what it can buy) to survive. Beyond mere survival, it also determines the material quality of life in terms of general health, clothing, living quarters, education, entertainment, and so on. Roughly put, wealth (loosely taken) is a necessity. To hold such a competition when the well-being (and perhaps the survival) of people is at stake seems morally repugnant.

One obvious counter is a variation on the survival-of-the-fittest arguments of the past. The basic idea is that, just like all living things, people have to compete to survive. As in nature, some people will not compete as well, and hence they will have less and perhaps not even enough to survive. Others will do better, and some few will do best of all.

The obvious reply is that this sort of competition makes some degree of sense when resources are so scarce that all cannot survive. To use a fictional example, if people are struggling to survive in a post-apocalyptic wasteland, then the competition for basic survival might be warranted by the reality of the situation. However, when resources are plentiful, it seems morally repugnant for the tiny few to hyper-concentrate wealth while the many are left with very little. To use the obvious analogy, seeing a glutton stuffing herself with a vast tableful of delicacies while her guards keep people away so that her minions can sell the scraps would strike all but the most callous as horrible. However, replace the glutton with one of the 1% and some folks are quite willing to insist that the situation is fair and just.

As a final point, the 1% also need to worry about the inequality of distribution. The social order which keeps the 99% from slaughtering the 1% requires that enough of the 99% believe that the situation is working for them. This can be done, to a degree, by coercion (police and military force) and delusion (this is where Fox News comes in). However, coercion and delusion have their limits and society, like all things, has a breaking point. While the rich can often escape a collapse in one country by packing up and heading to another (as dictators occasionally do), until space travel is a viable option the 1% are still stuck on earth with everyone else.

 


Sexbots are Persons, Too?

In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will be in the class of objects. However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man [...] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.
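To make the pass condition of the text-only test concrete, here is a toy sketch in Python. Everything in it is a hypothetical stand-in: the canned "machine", the "human", and the deliberately weak judge are inventions for illustration, not real conversational systems. The point is only the criterion itself: a machine passes when the judge's guesses are no better than chance.

```python
import random

# Toy sketch of the text-only test described above. All "players" are
# illustrative stand-ins.

def machine_reply(prompt: str) -> str:
    # Descartes' automaton: a canned response, no real "arranging of speech".
    return "Interesting. Tell me more."

def human_reply(prompt: str) -> str:
    # Stand-in for a human interlocutor, who actually addresses the prompt.
    return f"As to '{prompt}', I would say it depends."

def naive_judge(prompt: str, answer: str) -> bool:
    # Guesses "machine" whenever the answer ignores the prompt entirely.
    return prompt.lower() not in answer.lower()

def run_test(questions, rounds=1000):
    correct = 0
    for _ in range(rounds):
        q = random.choice(questions)
        is_machine = random.random() < 0.5
        answer = machine_reply(q) if is_machine else human_reply(q)
        correct += (naive_judge(q, answer) == is_machine)
    return correct / rounds  # ~0.5 would mean "indistinguishable": a pass

questions = ["what is justice", "do you feel pain", "what did I just ask"]
print(run_test(questions))  # ~1.0 here: this crude machine fails badly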

Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.

It might also be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware and hence it would still be reasonable to hold that it is not a person. It would seem to be a person, but would merely be acting like a person. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the classic problem of other minds: all I can do is see your behavior and infer that you are self-aware based on analogy to my own case. Hence, I do not know that you are aware since I cannot be you. From your perspective, the same is true about me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as being a person on those grounds. To fail to do so would be a mere prejudice in favor of the organic.

There are, of course, people who believe that other people can be used as they see fit. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence such people cannot consistently accept using other people in that manner. The other obvious reply is that such people are simply evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we would have as much evidence that robots have souls as we do that humans have souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as much like a human as possible. As such, to the degree that the ideal is reached, the "product" would be immoral to sell or own. This is a general problem for artificial intelligence: such beings are intended to be owned by people and to do onerous tasks, but to the degree that they are intelligent, they would be slaves.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would actually be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.


Four kinds of philosophical people

We’ll begin this post where I ended the last. The ideal philosopher lives up to her name by striving for wisdom. In practice, the pursuit of wisdom involves developing a sense of good judgment when tackling very hard questions. I think there are four skills involved in the achievement of good judgment: self-insight, humility, rigor, and cooperativeness.

Even so, it isn't obvious how the philosophical ideal is supposed to model actual philosophers. Even as I was writing the last post, I had the nagging feeling that I was playing the role of publicist for philosophy. A critic might say that I set out to talk about how philosophers were people, but only ended up stating some immodest proposals about the Platonic ideal of the philosopher. The critic might ask: Why should we think that it has any pull on real philosophers? Do the best professional philosophers really conceive of themselves in this way? If I have no serious answer to these questions, then I have done nothing more than indulge in a bit of cheerleading on behalf of my beloved discipline. So I want to start to address that accusation by looking at the reputations of real philosophers.

Each individual philosopher will have their own ideas about which virtues are worth investing in and which are worth disregarding. Even the best working philosophers end up neglecting some of the virtues in favor of others: e.g., some philosophers might find it relatively less important to write in order to achieve consensus among their peers, and instead put the accent on virtues like self-insight, humility, and rigour. Hence, we should expect philosophical genius to be correlated with predictable quirks of character which can be described using the 'four virtues' model. And if that is true, then we should be able to see how major figures in the history of philosophy measure up to the philosophical ideal. If the greatest philosophers can be described in light of the ideal, we should be able to say we've learned something about the philosophers as people.

And then I shall sing to the Austrian mountains in my best Julie Andrews vibrato: "public relations, this is not".

—-

In my experience, many skilled philosophers who work in the Anglo-American tradition will tend to have a feverish streak. They will tend to find a research program which conforms with their intuitions (some of which may be treated as "foundational" or givens), and then hold onto that program for dear life. This kind of philosopher will change her mind only on rare occasions, and even then only on minor quibbles that do not threaten her central programme. We might call this kind of philosopher a "programmist" or "anti-skeptic," since the programmist downplays the importance of humility and is more interested in characterizing herself in terms of the other virtues, like philosophical rigour.

You could name a great many philosophers who seem to fit this character. Patricia and Paul Churchland come to mind: both have long held the view that the progress of neuroscience will require the radical reformation of our folk psychological vocabulary. However, when I try to think of a modern exemplar of this tradition, I tend to think of W.V.O. Quine, who held fast to most of his doctrinal commitments throughout his lifetime: his epistemological naturalism and holism, to take two examples. This is just to say that Quine thought that the interesting metaphysical questions were answerable by science. Refutation of the deeper forms of skepticism was not very high on Quine's agenda; if there is a Cartesian demon, he waits in vain for the naturalist's attention. The most attractive spin on the programmist's way of doing things is to say that they have raised philosophy to the level of a craft, if not a science.

—-

Programmists are common among philosophers today. But if I were to take you into a time machine and introduce you to the elder philosophers, it would be easy to lose all sense of how the moderns compare with their predecessors. The first philosophers lived in a world where science was young, if not absent altogether; there was no end of mystery to how the universe got on. For many of them, there was no denying that skepticism deserved a place at the table. From what we can tell from what they left behind, many ancient philosophers (save Aristotle and Pythagoras) did not possess the quality that we now think of as analytic rigour. The focus was, instead, on developing the right kind of life, and then — well, living it.

We might think of this as a wholly different approach to being a philosopher than that of our modern friend the programmist. These philosophers were self-confident and autonomous, yet had plenty to say to the skeptic. For lack of a better term, we might call this sort of philosopher a "guru" or "informalist". The informalist trudges forward, not necessarily by the light of reason and explicit argument, but by that of insight and association, often expressed in aphorisms. To modern professional philosophers and academic puzzle-solvers, the guru may seem like a specialist in woo and mysticism, a peddler of non-sequiturs. Many an undergraduate in philosophy will aspire to be a guru, and endure the scorn of their peers (often rightly administered).

Be that as it may, some gurus end up having a vital place in the history of modern philosophy. Whenever I think of the 'guru' type of philosopher, I tend to think of Friedrich Nietzsche — and I feel justified in saying that in part because I guess that he would have accepted the title. For Nietzsche, insight was the single most important feature of the philosopher, and the single trait which he felt was altogether lacking in his peers.

Nietzsche was a man of passion, which is the reason why he is so easily misunderstood. Also, for a variety of reasons, Nietzsche was a man who suffered from intense loneliness. (In all likelihood, the fact that he was a rampant misogynist didn't help in that department.) But he was also a preacher's son, his rhetoric electric, his sermons brimming with insight and even weird lapses into latent self-deprecation. Moreover, he was a man who wrote in order to be read, and who was excited by the promise of new philosophers coming out to replace old canons. In the long run, he got what he wanted; as Walter Kaufmann wrote, "Nietzsche is one of the few philosophers since Plato whom large numbers of intelligent people read for pleasure".

—-

“He has the pride of Lucifer.” — Russell on Wittgenstein

Some philosophers prefer to strike out on their own, paving an intellectual path by way of sheer stamina and force of will. We might call them the "lone wolves". The lone wolf will often appear as a kind of contrarian with a distinctive personality. However, the lone wolf is set apart from a mere devil's advocate by the fact that she draws on unusually deep wellsprings of creativity and cleverness in her craft. Because she needs to strike off alone, the wolf has to be prepared to chew bullets for breakfast: there is no controversial position she is incapable of endorsing, so long as those positions qualify as valid moves in the game of giving and taking reasons. She is out for adventure, to prove herself capable of working on her own. More than anything else, the lone wolf despises philosophical yes-men and yes-women. She has no time for the people who are satisfied by conventional wisdom — people who revere the ongoing dialectic as a sacred activity, a Great Conversation between the ages. The lone wolf says: the hell with this! These are problems, and problems are meant to be solved.

Ludwig Wittgenstein was a lone wolf, in the sense that nobody could quite refute Wittgenstein except for Wittgenstein. The philosophical monograph which made him famous, the Tractatus, began with an admission of idiosyncrasy: "Perhaps this book will be understood only by someone who has himself already had the thoughts that are expressed in it—or at least similar thoughts.—So it is not a textbook.—Its purpose would be achieved if it gave pleasure to one person who read and understood it." He was a private man, who published very little while alive, and whose positions were sometimes unclear even to his students. He was an intense man, reputed to have wielded a hot poker at one of his contemporaries. And he had an oracular style of writing — the Tractatus resembles an overlong PowerPoint presentation, while the Investigations was a free-wheeling screed. These qualities conspired to give the man himself an almost mythical quality. As Ernest Nagel wrote in 1936 (quoting a Viennese friend): "in certain circles the existence of Wittgenstein is debated with as much ingenuity as the historicity of Christ has been disputed in others".

Wittgenstein’s work has lasting significance. His anti-private language argument is a genuine philosophical innovation, and widely celebrated as such. As such, he is the kind of philosopher that everybody has to know at least something about. But none of this came about by the power of idiosyncrasy alone. Wittgenstein achieved notoriety by demonstrating that he had a penetrating ability to go about the whole game of giving and taking reasons.

—-

“Synthesizers are necessarily dedicated to a vision of an overarching truth, and display a generosity of spirit towards at least wide swaths of the intellectual community. Each contributes partial views of reality, Aristotle emphasizes; so does Plotinus, and Proclus even more widely…” Randall Collins, The Sociology of Philosophies

Some philosophers are skilled at combining the positions and ideas that are alive in the ongoing conversation and weaving them into an overall picture. This is a kind of philosopher that we might call the "syncretist". Much like the lone wolf, the syncretist despises unchallenged dogmatism; but unlike the lone wolf, this is not because she enjoys the prospect of throwing down the gauntlet. Rather, the syncretist enjoys the murmur of people getting along, engaged in a productive conversation. Hence, the syncretist is driven to reconcile opposing doctrines, so long as those doctrines are plausible. When she is at her best, the syncretist is able to generate a powerful synthesis out of many different puzzle pieces, allowing the conversation to become more abstract without becoming unintelligible. They do not just say, "Let a thousand flowers bloom" — instead, they demonstrate how the blooming of one flower only happens in the company of others.

The only philosopher that I have met who absolutely exemplifies the spirit of the syncretist, and persuasively presents the syncretist as a virtuous standpoint in philosophy, is the Stanford philosopher Helen Longino. In my view, her book The Fate of Knowledge is a revelation.

A more infamous example of the syncretist, however, is Jürgen Habermas. Habermas is an under-appreciated philosopher, a figure who is widely neglected in Anglo-American philosophy departments and (for a time) was widely scorned in certain parts of Europe. True, Habermas is a difficult philosopher to read. And, in fairness, one sometimes gets the sense that his stuff is a bit too ecumenical to be motivated on its own terms. But part of what makes Habermas close to an ideal philosopher is that he is an intellectual who has read just about everything — he has partaken in wider conversations, attempting to reconcile the analytic tradition with themes that stretch far beyond its remit. Habermas also has a prodigious output: he has written on a countless variety of subjects, including speech act theory, the ethics of assertion, political legitimation, Kohlberg's stages of moral development, collective action, critical theory and the theory of ideology, social identity, normativity, truth, justification, civilization, argumentation theory, and doubtless many other things. If a dozen people carved up his bibliography and each staked a claim to part of it, you'd end up with a dozen successful academic careers.

For some intellectuals, syncretism is hard to digest. Just as both mothers in the court of King Solomon might have felt equally betrayed, the unwilling subjects of the syncretist's analysis may respond with ill tempers. In particular, the syncretist grates on the nerves of those who aspire to achieve the status of lone wolf intellectuals. Take two examples, mentioned by Dr. Finlayson (Sussex). On the one hand, Marxist intellectuals will sometimes like to accuse Habermas of "selling out" — for instance, because Habermas has abandoned the usual rhythms of dialectical philosophy by trying his hand at analytic philosophy. On the other hand, those in analytic philosophy are not always very happy to recognize Habermas as a precursor to the shape of analytic philosophy today. John Searle explains in an uncompromising review: "Habermas has no theory of social ontology. He has something he calls the theory of communicative action. He says that the 'purpose' of language is communicative action. This is wrong. The purpose of language is to perform speech acts. His concept of communicative action is to reach agreement by rational discussion. It has a certain irony, because Habermas grew up in the Third Reich, in which there was another theory: the 'leadership principle'." I suspect that Searle got Habermas wrong, but nobody said life as a philosopher was easy.

—-

Everything I've said above is a cartoon sketch of some philosophical archetypes. It is worth noting, of course, that none of the philosophers I have mentioned will fit into the neat little boxes I have made for them. The vagaries of the human personality resist being reduced to archetypes. Even in the above, I cheated a little: Nietzsche is arguably as much a lone wolf as he is a guru. I also don't mean to suggest that all professional philosophers will fit into anything quite like these categories. Some are by reputation much too close to the philosophical ideal to fit into an archetype. (Hilary Putnam comes to mind.) And other professional philosophers are nowhere close to the ideal — there is no shortage of philosophers behaving badly. I mean only to show how the 'four virtues' model of wisdom can be used to say something interesting about philosophers themselves.

(BLS Nelson is the author of this article.)

Seeing philosophers as people

“It is the profession of philosophers,” David K. Lewis writes, “to question platitudes that others accept without thinking twice.” He adds that this is a dangerous profession, since “philosophers are more easily discredited than platitudes.” As it turns out, in addition to being a brilliant philosopher, Lewis was a master of understatement.

For some unwary souls, conversation with the philosopher can feel like an attack or assault. The philosopher's favorite hobby is critical discussion, and this is almost guaranteed to be, shall we say, annoying. (Indeed, I am tempted to say that if it weren't annoying, it would be a sign that something has gone wrong — that the conversation is becoming stale and irrelevant.) Ordinary folk, on the other hand, generally try to do what it takes to get along with others, which means being polite and trying to smooth over conflict, and it may seem as though the philosopher has terrible manners for asking too many uncomfortable questions. And the ordinary folk are sometimes quite right. Indeed, sometimes what passes for philosophy really is just a trivial bloodsport, a pointless game of denigration and insult with no productive bottom line, disguised as disinterested inquiry (as illustrated by this hilarious spoof article).

The estrangement between philosophers and non-philosophers might owe to the fact that there is no strong consensus about what it means to be a philosopher. For one thing, philosophers are under external pressure to tell the world just who the hell they think they are. As funding is increasingly diverted away from the humanities, the self-identity of the philosopher has come under increased scrutiny. For another thing, the discipline is suffering from some internal strain. Analytic philosophy once had a strong mission statement: to clear up conceptual confusions by revealing how people were being fooled by grammar into committing to absurd theses. Unfortunately, over the past few decades the analytic philosopher's confidence in their ability to do conceptual analysis has suffered. The tried and true philosophical reliance upon aprioristic reasoning has fallen increasingly out of favor, as greater awareness of insights from psychology and the social sciences has begun to undermine the credibility of distinctively philosophical inquiry. The harder the social sciences encroach upon aprioristic terrain, the harder rear-guard philosophers try to push back, and it is not at all obvious that they are winning the fight. It is against this background that Livengood et al. confess: "Many signs point to an identity crisis in contemporary philosophy. As a group, we philosophers are puzzled and conflicted about what exactly philosophy is."

I don’t really think that philosophers should worry very much about their sense of identity, because there is a pretty straightforward way of characterizing the ideal philosopher. But in order to see why, it’s worth taking the time to think about what it means to be a philosopher: why it’s worth it, how non-philosophers can benefit from whatever the philosopher is up to, and how philosophers can figure out how to do their business better. We should start thinking more often about what the philosophical personality looks like, so that everyone can relate to philosophers as people.

A not-awful definition of philosophy could begin thus: "All philosophers are lovers — they are lovers of wisdom". This gives due credit to the etymology of philosophy (which, of course, is commonly translated as 'love of wisdom'). But it also sounds a bit perverse. Indeed, when little Johnny comes back from Oxford after a year of studying philosophy, and tells Mom that he has fallen in love with an abstract noun, one ought not be surprised if Mom frets for Johnny. So what I mean needs to be unpacked a little.

In the abstract, I would argue that wisdom involves at least four virtues: insight, prudence, reason, and fair-mindedness. In practice, I think, wisdom involves a degree of self-insight (the ability to articulate and weigh one's intuitions), intellectual humility (the ability to actively poke at and potentially abandon those intuitions), intellectual rigor (the ability to reason through the implications of what one thinks), and cooperative engagement (the ability to communicate one's own convictions in a cooperative and illuminating way). That is the sort of person that the philosopher ought to be.

This is not to suggest that this ideal of the philosopher is one that every philosopher in every time in history would endorse. To choose a recent example, one prominent philosopher argued (tongue-in-cheek, I think) that contemporary philosophers just aren’t like that. He argues: “What is literally true is that we philosophers value knowledge, like our colleagues in other departments. Do we love knowledge? One might reasonably demur from such an emotive description.” Evidently, the working assumption is that the reader learning this information is better served if they lower their expectations of philosophy, instead of lowering their expectations of the people working in philosophy departments. I cannot think of any way to reasonably motivate this assumption.

But even if we thought that somehow the quoted author had it right, the history of the future would show him wrong. The greatest luminaries in philosophy, the great wise and dead, have a tendency to crowd out the loud and supercilious living. Their ability to command our attention owes to the fact that philosophical luminaries have always filled an essential cultural need: namely, they have helped to reinvent the idea of what it means to come to maturity, by striving to be insightful, humble, rigorous, and engaged. 'The love of wisdom' is not just a roundabout way of speaking about valuing knowledge — it is a way of talking about trying to be better as people. Philosophers ask us to be at our best when they ask us to study wisdom for its own sake, because philosophy is as essential to adulthood as preschool is to the young.

This, I think, is a not-totally-unsatisfying way of looking at the ideal philosopher. But there is a lot missing. It doesn’t really capture the kind of energy that goes into doing philosophy, the nerdy thrill that goes into tackling the biggest questions you can think of. I have not given you any reason to think that the ideal of wisdom tells us anything about what real philosophers are like. I’m saving that for the next post.


(BLS Nelson is the author of this article.)

Can Everyone be Wealthy?

Emma Goldman (the tattoo, not the person)

It is sometimes asked whether or not everyone can be wealthy. This depends, obviously enough, on what is meant by “wealthy.” Determining what “wealthy” means requires sorting out the nature of wealth.

As might be imagined, there is a fair amount of debate about the true nature of personal wealth. While this oversimplifies things, a fairly standard view of wealth is that it consists of the net economic value of a person's assets minus their liabilities. To be a bit more specific, these assets typically include possessions (cars, guns, art, computers, books, appliances, and so on), monetary resources (cash, for example) and capital resources. Not everyone buys into the stock view, of course. For example, Emma Goldman claimed that "real wealth consists in things of utility and beauty, in things that help to create strong, beautiful bodies and surroundings inspiring to live in. But if man is doomed to wind cotton around a spool, or dig coal, or build roads for thirty years of his life, there can be no talk of wealth." As another example, some thinkers include non-economic goods (such as knowledge) within the realm of wealth. To keep things simple and within our current economic system, I will limit the discussion to the "stock" account of wealth (that is, economic assets).

In our current economic system, it is obviously not the case that everyone is wealthy. When this fact is brought up, some folks like to claim that even the poor of today are wealthier than the wealthy of the past. In some ways, this is true. After all, the typical poor person in North America or the United Kingdom has possessions that not even the greatest pharaoh or Caesar possessed (such as a microwave oven). In many other ways, this is not true. After all, a wealthy noble of the past would have land, structures, gold, art, and so on that would make him a wealthy man even today. Also, there is the obvious fact that there are poor people today who are as poor as the poorest people in human history in that they possess just the tatters on their backs and just enough food to not die (at least for the moment). In any case, the fact that the sum total of wealth of humanity is greater now than in the past (even taking into account that there are so many more of us) does not tell us much beyond that (such as whether the current distribution is just or whether we can all be wealthy or not).

Getting back to the main subject, what needs to be determined is what is meant by “wealthy.” As noted above, I am limiting my discussion to economic wealth, but a bit more needs to be said.

In some ways, wealth can be seen as analogous to height. A person has height if they have any vertical measurement at all. Likewise, a person has wealth if she has any economic assets in excess of her liabilities. This could be as little as a single penny or as much as billions of dollars. Obviously, everyone could (in theory) have wealth, just as everyone can have height. But, of course, a person is not wealthy just because s/he has wealth, any more than a person is tall simply because s/he has height. On the other side, having very little wealth is described as being destitute, and having very little height is described as being short.

Continuing the analogy, being wealthy or wealthier can be seen as analogous to being tall or taller. Being tall means having more height than average, and being taller than another means having more height than that person. Likewise, being wealthy would seem to mean having more wealth than average, and being wealthier than another means having more wealth than that person. If this view is correct, then we cannot all be wealthy any more than we can all be tall. Obviously, we could all have the same height or the same wealth, but the terms "tall" and "wealthy" would have no application in these cases. As such, we cannot all be wealthy: if we all had the same amount of wealth, then no one would be wealthy.

It could be contended that being wealthy is not a matter of comparison to the wealth of other people, but rather a matter of having economic assets that meet a specified level. Depending on how that level was specified, everyone could (in theory) be wealthy. Of course, whether or not such a level should count as wealthy would be a matter of debate.
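The contrast between the two readings can be made concrete with a minimal sketch (the toy population and the threshold values below are arbitrary stand-ins for illustration, not claims about where "wealthy" actually begins):

```python
# Two ways to operationalize "wealthy," following the discussion above.

def wealthy_comparative(wealth, population):
    """Comparative reading: wealthy = having more wealth than average."""
    return wealth > sum(population) / len(population)

def wealthy_threshold(wealth, level):
    """Threshold reading: wealthy = having assets above a specified level."""
    return wealth >= level

# A toy population: 99 people with $50,000 and one with $10 million.
population = [50_000] * 99 + [10_000_000]

# Comparative reading: only those above the mean qualify, so "wealthy"
# can never apply to everyone (here it applies to exactly one person).
print(sum(wealthy_comparative(w, population) for w in population))  # -> 1

# Threshold reading: with the level set at $10,000, everyone qualifies,
# so universal wealth is at least conceptually possible.
print(all(wealthy_threshold(w, 10_000) for w in population))  # -> True
```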

It might be contended that focusing on whether or not everyone can be wealthy is not as important (or interesting) as the question of whether or not everyone can be well-off in the sense of having adequate resources for a healthy and meaningful existence. This is, of course, a subject for another time.


Just Doesn’t Get It

Rhetoric of Reason (image via Wikipedia)

When it comes to persuading people, a catchy bit of rhetoric tends to be far more effective than an actual argument. One rather neat bit of rhetoric that seems to be favored by Tea Party folks and others is the “just doesn’t get it” device.

As a rhetorical device, it is typically used with the intent of dismissing or rejecting a person's (or group's) claims or views. For example, someone might say "liberals just don't get it. They think raising taxes is the way to go." The idea is that the audience is supposed to accept that liberals are wrong about tax increases on the grounds that it has been asserted that they "just don't get it." Obviously enough, saying "they just don't get it" does not prove that a claim or view is in error.

This method can also be cast as a fallacy, specifically an ad hominem. The idea is that a claim should be rejected based on a personal attack, namely the assertion that the person does not get it. It can also be seen as a genetic fallacy when used against a group.

This method is also sometimes used with the intent of showing that a view is correct, usually by claiming that someone (or some group) that (allegedly) disagrees is wrong. For example, someone might say “liberals just don’t get it. Raising taxes on the job creators hurts the economy.” Obviously enough, saying that someone (or some group) “just doesn’t get it” does not prove (or disprove) anything. What is needed is, obviously enough, evidence that the claim in question is true. In the example, this would involve showing that raising taxes on the job creators hurts the economy.

In general, the psychology behind this method seems to be that when a person says (or hears) "X doesn't get it", he means (or takes it to mean) "X does not believe what I believe" and thus rejects X's claim. Obviously enough, this is not good reasoning.

It is worth noting that if it can be shown that someone “just doesn’t get it”, then this would not be mere rhetoric or a fallacy. However, what would be needed is evidence that the person is in error and thus does not, in fact, get it.


Human, Really?

Kuhn used the duck-rabbit optical illusion to ... (image via Wikipedia)

Sharon Begley recently wrote an interesting article, "What's Really Human?" In this piece, she presents her concern that American psychologists have been making hasty generalizations over the years. To be more specific, she is concerned that such researchers have been extending the results gleaned from studies of undergraduates at American universities to the entire human race. For example, findings about what college students think about self-image are extended to all of humanity.

She notes that some researchers have begun to question this approach and have contended that American undergraduates are not adequate representatives of the entire human race in terms of psychology.

In one example, she considers the optical illusion involving two line segments. Although the segments have the same length, one has arrows on the ends pointing outward and the other has the arrows pointing inward. To most American undergraduates, the one with the inward-pointing arrows looks longer. But when the San of the Kalahari, African hunter-gatherers, look at the lines, they judge them to be the same length. This difference is taken to reflect the differing visual environments in which the two groups live.

This result is, of course, hardly surprising. After all, people who live in different conditions will tend to have different perceptual skill sets.

Begley's second example involves the "ultimatum game", which is typical of the tests intended to reveal truths about human nature via games played with money. The gist of the game is that there are two players, A and B. The experimenter gives player A $10. A then must decide how much of it to offer B. If B accepts the offer, each player keeps his or her share. If B rejects the offer, both leave empty-handed.
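Here is a minimal sketch of the game's payoff logic (the "minimum acceptable offer" threshold is a modeling assumption used for illustration, not part of the experimental protocol itself):

```python
# Payoff logic of the ultimatum game described above.

def ultimatum_round(pot, offer, min_acceptable):
    """Player A proposes `offer` out of `pot`; player B accepts only if
    the offer meets B's threshold. Returns (A's payoff, B's payoff)."""
    if offer >= min_acceptable:
        return pot - offer, offer   # deal accepted: the pot is split
    return 0.0, 0.0                 # deal rejected: both get nothing

# Typical U.S. undergraduate behavior reported below: A offers $4-5,
# and B refuses anything under $3.
print(ultimatum_round(10, offer=4.50, min_acceptable=3.00))  # (5.5, 4.5)
print(ultimatum_round(10, offer=2.00, min_acceptable=3.00))  # (0.0, 0.0)
```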

When undergraduates in the States play, player A will typically offer $4-5 while those playing B will most often refuse anything below $3. This is taken as evidence that humans have evolved a sense of justice that leads us to make fair offers and also to punish unfair ones, even when doing so means a loss. According to the theorists, humans do this because we evolved in small tribal societies in which social cohesion mattered and freeloaders (or free riders, as they are sometimes called) had to be kept from getting away with their freeloading.

As Begley points out, "people from small, nonindustrial societies, such as the Hadza foragers of Tanzania, offer about $2.50 to the other player—who accepts it. A 'universal' sense of fairness and willingness to punish injustice may instead be a vestige of living in WEIRD, market economies."

While this does provide some evidence for Begley's view, it does seem rather weak. The difference between the Americans and the Hadza does not seem to be one of kind (that is, it is not that Americans are motivated by fairness and the Hadza are not). Rather, it seems plausible to see this in terms of quantity. After all, Americans refuse anything below $3 while the Hadza's refusal level seems to be only 50 cents less. This difference could be explained in terms not of culture but of relative affluence. After all, to a typical American undergrad, it is no big deal to forgo $3. However, someone who has far less (as is probably the case with the Hadza) would probably be willing to settle for less.

To use an analogy, imagine playing a comparable game using food instead of money. If I had recently eaten and knew I had a meal waiting at home, I would be more inclined to punish a small offer than accept it. After all, I have nothing to lose by doing so and would gain the satisfaction of denying my “opponent” her prize. However, if we were both very hungry and I knew that my cupboards were bare, then I would be much more inclined to accept a smaller offer on the principle that some food is better than none.

Naturally, cultural factors could also play a role in determining what is fair or not. After all, if A is given the money, B might regard the money as A's property and regard A as being generous in sharing anything at all. This would show that culture is a factor, but this is hardly a shock. The idea of a universal human nature is quite consistent with its being modified by specific conditions. After all, individual behavior is modified by such conditions. To use an obvious example, my level of generosity depends on the specifics of the situation, such as the who, why, when and so on.

There is also the broader question of whether such money games actually reveal truths about justice and fairness. This topic goes beyond the scope of this brief essay, however.

Begley finishes her article by noting that “the list of universals-that-aren’t kept growing.” That is, allegedly universal ways of thinking and behaving have been found to not be so universal after all.

This shows that contemporary psychology is discovering what Herodotus noted thousands of years ago, namely that “custom is king” and what the Sophists argued for, namely relativism. Later thinkers, such as Locke and other empiricists, were also critical of the idea of universal (specifically innate) ideas. In contrast, thinkers such as Descartes and Leibniz argued for the idea of universal (specifically innate) ideas.

I am not claiming that these thinkers are right (or wrong), but it is certainly interesting to see that these alleged "new discoveries" in psychology are actually very, very old news. What seems to be happening in this cutting-edge psychology is a return to the rationalist and empiricist battles over the innate content of the mind (or lack thereof).


Being a Man V: Birds & Bees

Male Scarlet Robin (Petroica boodang) in the M... (image via Wikipedia)

After reading an article in National Geographic about orchids and evolution, the idea struck me that it makes sense to look at being a man in the context of evolutionary theory. In the case of the orchid article, the idea was that the amazing adaptations of orchids (for example, imitating female insects so as to attract pollinators) can all be explained in terms of natural selection. While humans have a broader range of behavior than orchids, the same principle would seem to apply.

Crudely and simply put, the theory is that organisms experience random mutations and these are selected for (or against) by natural processes. Organisms that survive and reproduce pass on their genes (including the mutations). Those that do not reproduce do not pass on their genes. Over time, this process of selection can result in significant changes in a species or even the creation of new species. While there are no purposes or goals in this "system", it can create the appearance of design: organisms that survive will be well suited to the conditions in which they live. This is, of course, not design: if they did not fit, they would not survive to be there.

Getting back to being a man, evolution has shaped men via this process of natural selection. As such, the men who are here now are descended from men who had qualities that contributed to their surviving and reproducing. These men will, in turn, go through the natural selection process. In the case of humans, the process is often more complicated than that of birds, bees and orchids. However, as noted above, the basic idea is the same. The "men" of the non-human species have a set of behaviors that define this role. In most cases, the majority of these behaviors (nest building, fighting, displaying, and so on) are instinctual. In the case of humans, some of the behavior is probably hard-wired, but much of it is learned behavior. However, if one buys into evolutionary theory, what lies behind all this is the process of evolution. As such, being a man would simply be an evolutionary "strategy" that arose out of the process of natural selection. On this view, being a man is on par with being a drake, a bull or a steer. That is, it involves being in a gender role that is typically occupied by biological males.

Of course, this does not help a great deal in deciding how one should act if one wants to be a man in a meaningful sense. But evolution is not about what one ought to do. It is simply about what is: survive and be selected, or fail and be rejected. That said, looking at comparable roles in the animal kingdom, as well as considering the matter of evolution (and biology), might prove useful in examining the matter scientifically.


Being a Man I: Social Construct

my 1960s wedding suit

Apparently some men are having trouble figuring out what it is to be a man. There are various groups and individuals that purport to be able to teach men how to be men (or at least dress like the male actors on the show Mad Men).

Before a person can become a man, it must be known what it is to be a man.  There are, of course, many conceptions about what it is to be a man.

One option is to take the easy and obvious approach: just go with the generally accepted standards of  society. After all, a significant part of being a man is being accepted as a man by other people.

On a large scale, each society has a set of expectations, stereotypes and assumptions about what it is to be a man. These can be taken as forming a set of standards regarding what one needs to be and do in order to be  a man.

Naturally, there will be conflicting (even contradictory) expectations so that meeting the standards for being a man will require selecting a specific subset. One option is to select the ones that are accepted by the majority or by the dominant aspect of the population. This has the obvious advantage that this sort of manliness will be broadly accepted.

Another option is to narrow the field by selecting the standards held by a specific group. For example, a person in a fraternity might elect to go with the fraternity's view of what it is to be a man (which will probably involve the mass consumption of beer). On the plus side, this enables a person to clearly be a man in that specific group. On the minus side, if the standards (or mandards) of the group differ in significant ways from the more general view of manliness, then the individual can run into problems if he strays outside of his mangroup.

A third option is to attempt to create your own standards of being a man and getting them accepted by others (or not). Good luck with that.

Of course, there is also the question of whether there is something more to being a man above and beyond the social construction of manliness. For some theorists, gender roles and identities are simply that: social constructs. Naturally, there is also the biological matter of being a male, but being biologically male and being a man are two distinct matters. There is a clear normative aspect to being a man and merely a biological aspect to being male.

If being a man is purely a matter of social construction (that is, we create and make up gender roles), then being a man in group X simply involves meeting the standards of being a man in group X. If that involves owning guns, killing animals, and chugging beer while watching porn and sports, then do that to be a man. If it involves sipping lattes, discussing Proust, listening to NPR, and talking about a scrumptious quiche, then do that. So, to be a man, just pick your group, sort out the standards and then meet them as best you can.

In many ways, this is comparable to being good: if being good is merely a social construct, then to be good you just meet the moral standards of the group in question. This is, of course, classic relativism (and an approach endorsed by leading sophists).

But perhaps being a man is more than just meeting socially constructed gender standards. If so, a person who merely meets the “mandards” of being a man in a specific group might think he is a man, but he might be mistaken. But, this is a matter for another time.


Rhetorical Overkill

Adolf Hitler portrait, bust, 3/4 facing right (image via Wikipedia)

As part of my critical thinking class, I teach a section on rhetoric. While my main concern is with teaching students how to defend against it, I also discuss how to use it. One of the points I make is that a risk with certain forms of rhetoric is what I call rhetorical overkill. This is commonly done with hyperbole, which is, by definition, an extravagant overstatement.

One obvious risk with hyperbole is that if it is too over the top, then it can be ineffective or even counterproductive. If a person is trying to use positive hyperbole, then going too far can create the impression that the person is claiming the absurd or even mocking the subject in question. For example, think of the over-the-top infomercials where the product is claimed to do everything but cure cancer. If the person is trying to use negative hyperbole, then going too far can undercut the attack by making it seem ridiculous. For example, calling a person a Nazi because he favors laws requiring people to use seat belts would seem rather absurd.

Another risk is that hyperbole can create an effect somewhat like crying "wolf". In that tale, the boy cried "wolf" so often that no one believed him when the wolf actually came. In the case of rhetorical overkill, the problem is that it can create what might be dubbed "hyperbolic fatigue." If matters are routinely blown out of proportion, this will tend to numb people to such terms. On a related note, if politicians and pundits routinely cry "Hitler" or "apocalypse" over lesser matters, what words will they have left when the situation truly warrants such terms?

In some ways, this is like swearing. While I am not a prude, I prefer to keep my swear words in reserve for situations that actually merit them. I've noticed that many people tend to use swear words in everyday conversations and I found this a bit confusing at first. After all, I have a "hierarchy of escalation" when it comes to words, and swear words are at the top. But, for many folks today, swear words are just part of everyday conversation (even in the classroom). So, when someone swears at me now, I pause to see if they are just talking normally or if they are actually trying to start trouble.

While I rarely swear, I do resent the fact that swear words have become so diluted and hence less useful for making a point quickly and directly. The same applies to extreme language: if we do not reserve it for extreme circumstances, then we diminish our language by robbing extreme words of their corresponding significance.

So, what the f@ck do you think?
