Category Archives: Critical Thinking

Checking “Check Your Privilege”


As a philosopher, I became familiar with the modern political concept of privilege as a graduate student—sometimes in classes, but sometimes in being lectured by other students about the matter. Lest anyone think I was engaged in flaunting my privileges, the lectures were always about my general maleness and my general appearance of whiteness (I am actually only mostly white) rather than any specific misdeed I had committed as a white-appearing male. I was generally sympathetic to most criticisms of privilege, but I was not particularly happy when people endeavored to use a person’s membership in a privileged class as grounds for rejecting the person’s claims out of hand. Back then, there was no handy phrase for checking a member of a privileged class. Fortunately (or unfortunately) such a phrase has since emerged, namely “check your privilege!”

The original intent of the phrase is, apparently, to remind a person making a claim on a political (or moral) issue that he is speaking from a position of privilege, such as being male or straight. While it is most commonly used against members of what can be regarded as the “traditional” privileged classes (males, whites, the wealthy, etc.), it can also be employed against people of classes that are either privileged relative to the classes they are commenting on or are simply different non-privileged classes. For example, a Latina might be told to “check her privilege” for making a remark about black women. In this case, the idea is to remind the transgressor that different oppressed groups experience their oppression differently.

As might be imagined, many people take issue with being told to “check their privilege!” In some cases, this is mere annoyance with the phrase. This annoyance can have some foundation, given that the phrase can carry a hostile connotation and can seem like a dismissive reply.

In other cases, the use of the phrase can be taken as an attempt to silence someone. Roughly put, “check your privilege” can be interpreted as “stop talking” or even as “you are wrong because you belong to a privileged class.” In some cases, people are interpreting the use incorrectly—but in other cases they are interpreting it quite correctly.

Thus, the phrase can be seen as having two main functions (in addition to its dramatic and rhetorical use). One is as a reminder, the other is as an attack. I will consider each of these in the context of critical thinking.

The reminder function of the phrase does have legitimacy in that it is grounded in a real need to remind people of two common cognitive biases, namely in-group bias and attribution error. In-group bias is the name for the tendency people have to easily form negative opinions of people who are not in their group (in this case, an allegedly privileged class). This bias leads people to regard members of their own group more positively (attributing positive qualities and assessments to their group members) while regarding members of other groups more negatively (attributing negative qualities and assessments to these others). For example, a rich person might regard other rich people as hardworking while regarding poor people as lazy, thieving and inclined to use drugs. As another example, a woman might regard her fellow women as kind and altruistic while regarding men as violent, sex-crazed and selfish.

Given the power of this bias, it is certainly worth reminding people of it—especially when their remarks show signs that the bias is likely in effect. Of course, telling someone to “check their privilege” might not be the nicest way to engage in the discussion, and it is less specific than “consider that you might be influenced by in-group bias.”

Attribution error is a bias that leads people to fail to appreciate that other people are as constrained by events and circumstances as they themselves would be in the same situation. For example, consider a discussion about requiring voters to have a photo ID, reducing the number of polling stations and reducing their hours. A person who is somewhat well off might express the view that getting an ID and driving across town to a polling station on his lunch break is no problem—because it is no problem for him. However, for someone who does not have a car and is very poor, these can be serious obstacles. As another example, someone who is rich might express the view that the poor should not be helped because they are obviously poor because they are lazy (and not because of the circumstances they face, such as being born into poverty).

Given the power of this bias, a person who seems to be making this error should certainly be reminded of the possibility. But, of course, telling the person to “check their privilege” might not be the most diplomatic way to engage, and it is certainly less specific than pointing out the likely error. Still, given the limits of Twitter, it might be a viable option in that social media context.

In regards to the second main use, using the phrase to silence a person or to reject the person’s claim would not be justified. While it is legitimate to consider the effects of biases, to reject a person’s claim because of their membership in a specific class would be an ad hominem of some sort. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack is made against the character of the person making the claim, her circumstances, or her actions (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

1. Person A makes claim X.

2. Person B makes an attack on person A.

3. Therefore A’s claim is false.

The reason why an ad hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

Because of the use of “check your privilege” in this role, I’d suggest a minor addition to the ad hominem family, the check your privilege ad hominem:

1. Person A makes claim X.

2. Person B tells A to “check their privilege” based on A’s membership in group G.

3. Therefore A’s claim is false.

This is, obviously enough, bad reasoning.
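To make the structural point concrete, here is a toy sketch in Python (all names and numbers are invented for this illustration): a rational verdict on a claim is a function of the evidence alone, so it never even sees the speaker, while the fallacious pattern flips its verdict based purely on who is speaking.

```python
# Toy illustration of why the "check your privilege" ad hominem fails:
# rational evaluation depends only on the evidence, never the speaker.

def evaluate_claim(evidence_for: int, evidence_against: int) -> str:
    """Accept, reject, or suspend judgment based only on the evidence."""
    if evidence_for > evidence_against:
        return "accept"
    if evidence_against > evidence_for:
        return "reject"
    return "suspend judgment"

def check_your_privilege_verdict(speaker_group: str) -> str:
    """The fallacious pattern: the verdict is driven by who is speaking."""
    return "reject" if speaker_group == "privileged" else "accept"

# Same evidence, any speaker: the rational verdict is unchanged, since
# evaluate_claim takes no speaker argument at all.
print(evaluate_claim(3, 1))                        # accept

# The fallacious verdict flips with the speaker, evidence unseen.
print(check_your_privilege_verdict("privileged"))  # reject
```

The point of the sketch is simply that `evaluate_claim` has no parameter for group membership, which is the formal way of saying such membership is irrelevant to the truth of the claim.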

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

 

The Speed of Rage


The rise of social media has created an entire new world for social researchers. One focus of the research has been on determining how quickly and broadly emotions spread online. The April 2014 issue of the Smithsonian featured an article on this subject by Matthew Shaer.

Not surprisingly, researchers at Beijing University found that the emotion of rage spread the fastest and farthest online. Researchers in the United States found that anger was among the fastest spreaders, but not the fastest in their study: awe spread even faster than rage, though rage was still quite fast. As might be expected, sadness was a slow spreader with a limited reach.

This research certainly makes sense—rage tends to be a strong motivator and sadness tends to be a de-motivator. The power of awe was an interesting finding, but some reflection does indicate that this would make sense—the emotion tends to move people to want to share (in the real world, think of people eagerly drawing the attention of strangers to things like beautiful sunsets, impressive feats or majestic animals).

In general, awe is a positive emotion and hence it seems to be a good thing that it travels far and wide on the internet. Rage is, however, something of a mixed bag.

When people share their rage via social media, they are sharing with an intent to express (“I am angry!”) and to infect others with this rage (“you should be angry, too!”). Rage, like many infectious agents, also has the effect of weakening the host’s “immune system.” In the case of anger, the immune system is reason and emotional control. As such, rage tends to suppress reason and lower emotional control. This serves to make people even more vulnerable to rage and quite susceptible to the classic fallacy of appeal to anger—this is the fallacy in which a person accepts her anger as proof that a claim is true. Roughly put, the person “reasons” like this: “this makes me angry, so it is true.” This infection also renders people susceptible to related emotions (and fallacies), such as fear (and appeal to force).

Because of these qualities of anger, it is easy for untrue claims to be accepted far and wide via the internet. This is, obviously enough, the negative side of anger.  Anger can also be positive—to use an analogy, it can be like a cleansing fire that sweeps away brambles and refuse.

For anger to be a positive factor, it would need to be a virtuous anger (to follow Aristotle). Put a bit simply, it would need to be the right degree of anger, felt for the right reasons and directed at the right target. This sort of anger can mobilize people to do good. For example, people might learn of a specific corruption rotting away their society and be moved to act against it. As another example, people might learn of an injustice and be mobilized to fight against it.

The challenge is, of course, to distinguish between warranted and unwarranted anger. This is a rather serious challenge—as noted above, people tend to feel that they are right because they are angry rather than inquiring as to whether their rage is justified or not.

So, when you see a post or Tweet that moves you to anger, think before adding fuel to the fire.

 


Scientism, Quietism and Continental Philosophy

Peter Unger was recently interviewed about his new book that critiques Analytic Philosophy, and in the interview he says a lot of things that plenty of Continental Philosophers would not disagree with. But his response is not to turn to Continental philosophy – not at all. Even Bertrand Russell is, in essence, too “Continental” in tone for Unger. He quotes Russell contemplating the value of philosophy as not something that seeks answers, because the questions of philosophy cannot be determinately answered, but rather as expanding the intellectual imagination, and then dismisses this as “nonsense.”

Unger’s reasoning seems to be that a test could be done to check how creative or dogmatic a person is, which presumably means that we could check whether studying philosophy does or does not enrich our intellectual imagination. This misses the point on two levels: we do not actually administer such tests, so his argument is moot to start with; more importantly, the idea is that those who grasp the value of philosophy will be affected by definition, while those who do not are misunderstanding its purpose.

We owe the word “philosopher” to Socrates, who distinguished between sophists, those who merely argue for the sake of it, and philosophers, lovers of wisdom. Socrates famously tells the story of his realization that the Oracle at Delphi may not have been wrong in proclaiming him the wisest man in Athens once he redefined what it really means to be wise: he knows that he knows nothing, while the other men think they have answers. To believe oneself to have things more figured out than everyone else – as Unger, it’s worth noting, repeatedly does – is a form of egotism disappointing to see in a mind meant to be devoted to the nature of being. One man’s capacities may exceed another’s when we are comparing everyday activities, but when the ability at issue is the comprehension of the infinite, the significance is surely reduced. All our lives are short in comparison to the age of the universe.

Unger does mention the Ancients – he says “He [Kit Fine] has no more idea of what he’s doing than Aristotle did, and in Aristotle’s day there was an excuse: nobody knew anything”. This attitude shows his commitment to the scientistic point of view. He states at the outset of the interview that the goal of philosophy is to “write up deep stories which are true, or pretty nearly true, about how it is with the world. By that I especially mean the world of things that includes themselves, and everything that’s spatio-temporally related to them, or anything that has a causal effect on anything else, and so on.” Of course, a phrase like “and so on” may mislead, but it certainly does not sound as if Unger has any interest in questions of meaning or human experience. His dismissal of Ancient investigations as hopeless is particularly telling, though. What does it mean to claim that they “knew nothing”? In some ways, they were more aware of much that we’ve since forgotten: knowledge of the rotation of the seasons, the placement of the stars, the behavior of animals and the preparation of foods that was once common is now specialized or, in some cases, simply unavailable (consider, for example, what light pollution has done to the night sky). Industrialization has increased technology, but technology is not equivalent to knowledge – it’s just one form of knowledge.

Analytic philosophers who discover (after already becoming philosophers) that philosophy is not a form of science often propose that the answer is to give up philosophy altogether – turn out the lights and go home. Making this case in a book in the genre tends to seem a bit hypocritical; but then, the Analytic thinkers who do give it up will only have the chance to make the argument at cocktail parties. More worth addressing is the fact that Unger avoids mentioning the Continental approach at all. He suggests that philosophy may be “literature” for some, but what this means is unclear (beyond its implying a general worthlessness). From outside the Analytic tradition, philosophy is not the same as literature, but it’s not the same as science either. It has its own category, as the exploration and contextualization of our place in the world.

As Emerson said, each age must write its own books. The wisdom of the past cannot be genetically infused into the next generation. Information is handed down, but true understanding has to be struggled through again and again, and grasped within each particular culture or time.

One last thought: the writer of the interview might think I’m recommending meditation and enlightenment, per the bookstore mentioned at the end of her piece. While I’m not, I think it’s worth bringing up that there are plenty of books in the Western philosophy section that are just as silly as those self-help texts look (was there one about Plato and a Platypus recently?), and Eastern texts that are worthwhile. Unger treats them as all the same in value (“nothing much”) while different in type (“this” vs. “that”), whereas I would say it is the difference in value that is paramount; the types may blend together and overlap, given that the subject is so great.

Science & Self-Identity


The assuming an authority of dictating to others, and a forwardness to prescribe to their opinions, is a constant concomitant of this bias and corruption of our judgments. For how almost can it be otherwise, but that he should be ready to impose on another’s belief, who has already imposed on his own? Who can reasonably expect arguments and conviction from him in dealing with others, whose understanding is not accustomed to them in his dealing with himself? Who does violence to his own faculties, tyrannizes over his own mind, and usurps the prerogative that belongs to truth alone, which is to command assent by only its own authority, i.e. by and in proportion to that evidence which it carries with it.

-John Locke

As a philosophy professor who focuses on the practical value of philosophical thinking, one of my main objectives is to train students to be effective critical thinkers. While true critical thinking has been, ironically, threatened by the fact that it has become something of a fad, I stick with a very straightforward and practical view of the subject. As I see it, critical thinking is the rational process of determining whether a claim should be accepted as true, rejected as false, or subjected to the suspension of judgment. Roughly put, a critical thinker operates on the principle that the belief in a claim should be proportional to the evidence for it, rather than in proportion to our interests or feelings. In this I follow John Locke’s view: “Whatsoever credit or authority we give to any proposition more than it receives from the principles and proofs it supports itself upon, is owing to our inclinations that way, and is so far a derogation from the love of truth as such: which, as it can receive no evidence from our passions or interests, so it should receive no tincture from them.” Unfortunately, people often fail to follow this principle and do so in matters of considerable importance, such as climate change and vaccinations. To be specific, people reject proofs and evidence in favor of interests and passions.
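Locke’s proportionality principle can be given a simple quantitative gloss. The sketch below is a minimal illustration in Python, assuming a Bayesian reading of “belief proportional to evidence” (the probabilities are invented for illustration): the credence one ends up with is fixed entirely by the prior and by how likely the evidence is under each hypothesis, leaving no room for “tincture” from passions or interests.

```python
# Minimal sketch: proportioning belief to evidence via Bayes' theorem.
# All probabilities here are invented for illustration.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior credence in a claim after observing the evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start undecided (prior = 0.5); the evidence strongly favors the claim.
posterior = bayes_update(prior=0.5,
                         p_evidence_if_true=0.9,
                         p_evidence_if_false=0.1)
print(round(posterior, 2))  # 0.9
```

The point of the gloss is structural, not numerical: interests and feelings simply do not appear as inputs to the update, which is one way of cashing out Locke’s demand that truth “command assent by only its own authority.”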

Despite the fact that the scientific evidence for climate change is overwhelming, there are still people who deny climate change. These people are typically conservatives—although there is nothing about conservatism itself that requires denying climate change.

While rejecting the scientific evidence for climate change can be regarded as irrational, it is easy enough to attribute a rational motive behind this view. After all, there are people who have an economic interest in denying climate change or, at least, preventing action from being taken that they regard as contrary to their interests (such as implementing the cap and trade system on carbon originally proposed by conservative thinkers). This interest would provide a motive to lie (that is, make claims that one knows are not true) as well as a psychological impetus to sincerely hold to a false belief. As such, I can easily make sense of climate change denial in the face of overwhelming evidence: big money is on the line. However, the denial is less rational for the majority of climate change deniers—after all, they are not owners of companies in the fossil fuel business. However, they could still be motivated by a financial stake—after all, addressing climate change could cost them more in terms of their energy bills. Of course, not addressing climate change could cost them much more.

In any case, I get climate denial in that I have a sensible narrative as to why people reject the science on the basis of interest. However, I have been rather more confused by people who deny the science regarding vaccines.

While vaccines are not entirely risk free, the scientific evidence is overwhelming that they are safe and very effective. Scientists have a good understanding of how they work and there is extensive empirical evidence of their positive impact—specifically the massive reduction in cases of diseases such as polio and measles. Oddly enough, there is a significant number of Americans who willfully deny the science of vaccination. What is most unusual is that these people tend to be college educated. They are also predominantly political liberals, thus showing that science denial is bi-partisan. It is fascinating, but also horrifying, to see someone walk through the process of denial—as shown in a segment on the Daily Show. This process is rather complete: evidence is rejected, experts are dismissed and so on—it is as if the person’s mind switched into a Bizarro version of critical thinking (“kritikal tincing” perhaps). This is in marked contrast with the process of rational disagreement in which the methodology of critical thinking is used in defense of an opposing viewpoint. Being a philosopher, I value rational disagreement and I am careful to give opposing views their due. However, the use of fallacious methods and outright rejection of rational methods of reasoning is not acceptable.

As noted above, climate change denial makes a degree of sense—behind the denial is a clear economic interest. However, vaccine science denial seems to lack that motive. While I could be wrong about this, there does not seem to be any economic interest that would benefit from this denial—except, perhaps, the doctors and hospitals that will be treating the outbreaks of preventable diseases. However, doctors and hospitals obviously encourage vaccination. As such, an alternative explanation is needed.

Recent research does provide some insight into the matter and this research is consistent with Locke’s view that people are influenced by both interests and passions. In this case, the motivating passion seems to be a person’s commitment to her concept of self. The idea is that when a person’s self-concept or self-identity is threatened by facts, the person will reject the facts in favor of her self-identity.  In the case of the vaccine science deniers, the belief that vaccines are harmful has somehow become part of their self-identity. Or so goes the theory as to why these deniers reject the evidence.

To be effective, this rejection must be more than simply asserting the facts are wrong. After all, the person is aiming to deceive herself in order to maintain her self-identity. As such, the person must create an entire narrative which makes the rejection seem sensible and believable to her. A denier must, as Pascal said in regards to his famous wager, make himself believe his denial. In the case of matters of science, a person needs to reject not just the claims made by scientists but also the method by which the scientists support the claims. Roughly put, the narrative of denial must be a complete story that protects itself from criticism. This is, obviously enough, different from a person who denies a claim on the basis of evidence—since there is rational support for the denial, there is no need to create a justifying narrative.

This, I would say, is one of the major dangers of this sort of denial—not the denial of established facts, but the explicit rejection of the methodology that is used to assess facts. While people often excel at compartmentalization, this strategy runs the risk of corrupting the person’s thinking across the board.

As noted above, as a philosopher one of my main tasks is to train people to think critically and rationally. While I would like to believe that everyone can be taught to be an effective and rational thinker, I know that people are far more swayed by rhetoric and (ironically) fallacious reasoning than they are by good logic. As such, there might be little hope that people can be “cured” of their rejection of science and reasoning. Aristotle took this view—while noting that some can be convinced by “arguments and fine ideals” most people cannot. He advocated the use of coercive habituation to get people to behave properly, and this could be (and has been) employed to correct incorrect beliefs. However, such a method is agnostic in regards to the truth—people can be coerced into accepting the false as well as the true.

Interestingly enough, a study by Brendan Nyhan shows that reason and persuasion both fail when employed in attempts to change false beliefs that are critical to a person’s self-identity. In the case of Nyhan’s study, there were various attempts to change the beliefs of vaccine science deniers using reason (facts and science) and also various methods of rhetoric/persuasions (appeals to emotions and anecdotes). Since reason and persuasion are the two main ways to convince people, this is certainly a problem.

The study and other research did indicate an avenue that might work. Assuming that it is the threat to a person’s self-concept that triggers the rejection mechanism, the solution is to approach a person in a way that does not trigger this response. To use an analogy, it is like trying to conduct a transplant without triggering the body’s immune system to reject the transplanted organ.

One obvious problem is that once a person has taken a false belief as part of his self-concept, it is rather difficult to get him to regard any attempt to change his mind as anything other than a threat. Addressing this might require changing the person’s self-concept or finding a specific strategy for addressing that belief that is somehow not seen as a threat. Once that is done, the second stage—that of actually addressing the false belief—can begin.

 


The real reason why libertarians become climate-deniers

We live at a point in history at which the demand for individual freedom has never been stronger — or more potentially dangerous. For this demand — the product of good things, such as the refusal to submit to arbitrary tyranny characteristic of ‘the Enlightenment’, and of bad things, such as the rise of consumerism at the expense of solidarity and sociability — threatens to make it impossible to organise a sane, collective democratic response to the immense challenges now facing us as peoples and as a species. “How dare you interfere with my ‘right’ to burn coal / to drive / to fly; how dare you interfere with my business’s ‘right’ to pollute?” The form of such sentiments would have seemed plain bizarre, almost everywhere in the world, until a few centuries ago; and to uncaptive minds (and un-neo-liberalised societies) still does. But it is a sentiment that can seem close to ‘common sense’ in more and more of the world: even though it threatens to cut off at the knees action to prevent existential threats to our collective survival, let alone our flourishing.

Such alleged rights to complete (sic.) individual liberty are expressed most strongly by ‘libertarians’.

Now, before I go any further (because you already know from my title that this article is going to be tough on libertarians), I should like to say for the record that some of my best friends (and some of those I most intellectually admire) are libertarians. Honestly: I mean it. Being of a libertarian cast of mind can be a sign of intellectual strength, of fibre; of a healthy iconoclasm. It can entail intellectual autonomy in its true sense. A libertarian of one kind or another can be a joy to be around.

But too often, far too often, ‘libertarianism’ nowadays involves a fantasy of atomism and an unhealthy dogmatic contrarianism. Too often, ironically, it involves precisely the dreary conformism so wonderfully satirized at the key moment in The Life of Brian, where the crowd repeats, all together, like automata, the refrain “We are all individuals”. Too often, libertarians to a man (and, tellingly, virtually all rank-and-file libertarians are male) think that they are being radical and different: by all being exactly the same as each other. Dogmatic, boringly-contrarian hyper-‘individualists’ with a fixed set of beliefs impervious to rational discussion. Adherents of an ‘ism’, in the worst sense.

Such ‘libertarianism’ is an ideology that seems to have found its moment, or at least its niche, in a consumerist economistic world that is fixated on the alleged specialness and uniqueness of the individual (albeit that, as already made plain, it is hard to square the notion that this is or could be libertarianism’s ‘moment’ with the most basic acquaintance with the social and ecological limits to growth as our societies are starting literally to encounter them). ‘Libertarianism’ is evergreen in the USA, but, bizarrely, became even more popular in the immediate wake of the financial crisis (a crisis caused, one might innocently have supposed, by too much license being granted to many powerless and powerful economic actors: in the latter category, most notably the banks and cognate dubious financial institutions…). In the UK, it is a striking element in the rise to popularity of UKIP: for, while UKIP is socially-regressive/reactionary, it is very much a would-be libertarian party, the rich man’s friend, in terms of its economic ambitions: it is for a flat tax, for ‘free-trade’-deals the world over, for a bonfire of regulations, for the selling-off of our public services, and so on. (Incidentally, this makes the apparent rise in working-class (or indeed middle-class) support for UKIP at the present time an exemplary case of turkeys voting for Christmas. Someone who votes UKIP without being among the richest 1% is acting as a brilliant ally of their own gravediggers.)

This article concerns a contradiction at the heart of the contemporary strangely-widespread ‘ism’ that is libertarianism. A contradiction that, once it is understood, essentially destroys whatever apparent attractions it may have. And, surprisingly, shows libertarianism now to be a closer ally to cod-‘Post-Modernism’ or to the most problematic elements of ‘New Age’ thinking than to that of the Enlightenment…

Libertarianism likes to present itself as a philosophy or ideology that is rigorously objective. Wedded to the truth, and rationality. Ayn Rand called her cod-philosophy ‘Objectivism’. Tibor Machan and other well-known libertarian philosophers today place a central emphasis on reason as their guide. Libertarians like to think that they are honest, where others aren’t, about ‘human nature’ (it’s thoroughly selfish), and like to claim that there is something self-deceptive or propagandistically dishonest about socialism, ecologism and other rival philosophies. Without its central claim to hard-nosed objectivity, truth and rationality, libertarianism would be nothing.

But this central commitment is in profound tension with the libertarian commitment, equally absolute, to ‘liberty’. For truth, truths, truthfulness, rationality, objectivity, impose a ‘constraint’. A massive utterly implacable constraint, on one’s license to do and believe and think whatever one wants. One cannot be Carroll’s Humpty Dumpty in a world of truth and reason. One cannot intelligibly think that freedom of thought requires complete license, or that moral freedom requires complete individual license, in such a world.

The dilemma of the libertarian was already laid bare in the progress of the thinking of a hero of some libertarians, Friedrich Nietzsche, in the great third and final essay of his masterpiece THE GENEALOGY OF MORALITY. Nietzsche can appear, on a superficial reading of that essay, to be endorsing a kind of artistic disregard for truth; it turns out, as the essay follows its remarkable course, that this is far from so; in fact, it is the opposite of the truth. In the end, taking further a line of thought that he began in the great fifth book of THE GAY SCIENCE, Nietzsche lines up as a fanatical advocate of truth: he speaks of drawing the hard consequences of being no longer willing to accept the lie of theism, and of “we godless anti-metaphysicians” as the true heirs of Plato: “Even we seekers after knowledge today”, Nietzsche writes, “we godless anti-metaphysicians still take our fire, too, from the flame lit by a faith that is thousands of years old, that Christian faith which was also the faith of Plato, that God is the truth, that truth is divine.”

He contrasts his stance with that of the legendary Assassins, who held that “Nothing is true, [and therefore] everything is permitted”. He admires their ambition, but absolutely cannot find himself able to simply agree with what they said.

Contemporary libertarianism is stuck in a completely cleft stick: stuck wanting to agree with Nietzsche’s considered position and yet wanting to endorse something like the Assassins’ creed too. Libertarianism, centred as its name makes plain on the notion of ‘complete’ individual freedom, inevitably runs up, sooner or later, against ‘shackles’: the limits imposed on one’s thought and action by adherence to truth. (Acknowledging the truth of human-induced dangerous climate change is only the most obvious case of this; there are many, many others.)

This explains the extraordinary and pitiful sight of so many libertarians finding themselves attracted to climate-denial and similarly pathetic evasions of the absolute ‘constraint’ that truth and rationality force upon anyone and everyone who is prepared to face the truth, at the present time. Such denial is over-determined. Libertarians have various strong motivations for not wanting to believe in the ecological limits to growth: such limits often recommend state action, and they undermine the profitability of some out-of-date businesses (e.g. coal and fracking companies) that fund some libertarian-leaning thinktank-work. Limits undermine the case for deregulation. The limits to growth are a powerful case in point of the need for a fundamentally precautionary outlook: anathema to the reckless Promethean fantasies that animate much libertarianism. Furthermore, libertarianism depends for its credibility on our being able to determine what individuals’ rights are, and to separate individuals completely from one another. Our massive inter-dependence as social animals in a world of ecology (even more so, actually, in an internationalised and networked world) undermines this, by making, for example, our responsibility for pollution a profoundly complex matter of inter-dependence that flies in the face of silly notions of being able to have property-rights in everything. (Are we supposed to be able to buy and sell quotas in cigarette-smoke? Much easier to deny that passive smoking causes cancer.) Above all, though: libertarians cannot stand to be told that they do not have as much epistemic right as anyone else on any topic they like to think they understand or have some ‘rights’ in relation to: “Who are you to tell me that I have to defer to some scientist?”

This brings us to the nub of the issue, and explains the truly tragic spectacle of someone like Jamie Whyte becoming a climate-denier: a critical-thinking guru who made his name as a hardline advocate of truth, objectivity and rationality, arguing (quite rightly, and against the current of our time, insofar as that current is consumeristic, individualistic and therefore relativistic/subjectivistic) that no one has an automatic right to their own opinion; one has to earn that right, through knowledge or evidence or good reasoning. His libertarian love for truth and reason has finally crashed into a limit: his libertarian love for big business, for the unfettered pursuit of Mammon, and, more important still, for having the right to his own opinion, no matter what. A lover of truth and reason, driven to deny the most crucial truth about the world today (that pollution is on the verge of collapsing our civilisation); his subjectivising of everything important turning finally to destroy his love for truth itself. Truly a tragic spectacle. Or perhaps we should say: truly farcical.

The remarkable irony here is that libertarianism, allegedly congenitally against ‘political correctness’ and other post-modern fads, allegedly a staunch defender of the Enlightenment against the forces of unreason, has itself become the most ‘Post-Modern’ of doctrines. A new, extreme form of individualised relativism; an unthinking product of (the worst element of) its/our time (insofar as this is a time of ‘self-realization’, and ultimately of license). Libertarianism, including the perverse and deadly denial of ecological constraints, is, far from being a crusty enemy of the ‘New Age’, in this sense the ultimate bastard child of the 1960s.

To sum up. Libertarianism was founded on the love of truth and reason; but it is founded also, of course, on the inviolability of the individual. Taken to its ‘logical’ conclusion, truth itself is felt as an ‘imposition’ on the individual. The sovereign liberty of the self, in libertarianism, is at ineradicable odds with the willingness to accept others’ truths. And it is the former, sadly, which tends to win out. For, as we have seen, the denial by libertarians of elementary contemporary scientific truths, such as the theory of greenhouse-gas heat build-up, is over-determined. When truth clashes with a dogmatic insistence on one’s own ‘complete’ freedom of mental and physical manoeuvre, and with profit; when the truth is that we are going to have to rein in some of our appetites if we are to bequeath a habitable world to our children’s children…then the truth is: that truth itself is an obstacle easily overcome, by the will of weak, only-too-human libertarians.

The obsession of libertarians with individual liberty crowds out the value of truth. In the end, their thinking becomes voluntaristic and contrarian for the sake of it. They end up believing simply what they WANT to believe. And, as explained above, they don’t WANT to accept the truths of ecology, of climate science, and so on. And so they deny them.

As Wittgenstein famously remarked: the real difficulty in philosophy is one of the will, more even than of the intellect. What is hard is to will oneself to accept things that are true that one doesn’t want to believe, and moreover that (in the case of some on the ‘hard’ Right) one’s salary or one’s stock-options or one’s ability to live with oneself depend on one not believing.

It takes strength, fibre, it takes a truly philosophical sensibility — it takes a willingness to understand that intellectual autonomy in its true sense essentially requires submission to reality — to be able to acknowledge the truth; rather than to deny it.

Ad Baculum, Racism & Sexism

Opposition poster for the 1866 election. Geary...

(Photo credit: Wikipedia)

I was asked to write a post about the ad baculum in the context of sexism and racism. To start things off, an ad baculum is a common fallacy that, like most common fallacies, goes by a variety of names. This particular fallacy is also known as appeal to fear, appeal to force and scare tactics. The basic idea is quite straightforward and the fallacy has a simple form:

Premise: Y is presented (a claim that is intended to produce fear).

Conclusion:  Therefore claim X is true (a claim that is generally, but need not be, related to Y in some manner).

 

This line of “reasoning” is fallacious because creating fear in people (or threatening them) does not constitute evidence that a claim is true. The tactic can nonetheless be rather effective as a persuasive device, since fear is a powerful motivator of belief. But there is a distinction between a logical reason to accept a claim as true and a motivating reason to believe that a claim is true.

Like all fallacies, ad baculums will serve any master, so they can be employed as a device in “support” of any claim. In the days when racism and sexism were rather more overt in America, ad baculums were commonly employed in the hopes of motivating people to accept (or at least not oppose) racism and sexism. Naturally, the less subtle means of direct threats and physical violence (up to and including murder) were deployed as well.

In the United States of 2014, overt racism and sexism are regarded as unacceptable and those who make racist or sexist claims sometimes find themselves the object of public disapproval. In some cases, making such claims can cost a person his job.

In some cases, it will be claimed that the claims were not actually racist or sexist. In other cases, the racism or sexism will not be denied, but an appeal will be made to freedom of expression and concerns will be raised that a person is being denied his rights when he is subject to a backlash for remarks that some might regard as racist or sexist.

Given that people are sometimes subject to negative consequences for making claims that are seen by some as racist or sexist, it is not unreasonable to consider that ad baculums are sometimes deployed to limit free expression. That is, that the threat of some sort of retaliation is used to persuade people to accept certain claims. Or, at the very least, used in an attempt to silence people.

It is rather important to be clear about an important distinction between an appeal to fear (using fear to get people to believe) and there being negative consequences for a person’s actions. For example, if someone says “you know, young professor, that we carefully consider a person’s view on race and sex before granting tenure…so I certainly hope that you are with us in your beliefs and actions”, then that is an appeal to fear: the young professor is supposed to agree with her colleagues and believe that claims are true because she has been threatened. But, if a young professor realizes that she will be fired for yelling things like “go back to England, white devil honkey crackers male-pigs” at her white male students and elects not to do so, she is not a victim of an appeal to fear. To use another example, if I refrain from shouting obscenities at the Dean because I would rather not be fired, I am not a victim of ad baculum. As a final example, if I decide not to say horrible things about my friends because I know that they would reconsider their relationship to me, then I am not a victim of an ad baculum. As such, the mark of an ad baculum is not that a person faces potential negative consequences for saying things; it is that a person is supposed to accept a claim as true on the basis of “evidence” that is merely a threat or something intended to create fear. As such, the fact that making claims that could be taken as sexist or racist could result in negative consequences does not entail that anyone is a victim of ad baculum in this context.

What some people seem to be worried about is the possibility of a culture of coercion (typically regarded as leftist) that aims at making people conform to a specific view about sex and race. If there were such a culture or system of coercion that aimed at making people accept claims about race and gender using threats as “evidence”, then there would certainly be ad baculums being deployed.

I certainly will not deny that there are some people who do use ad baculums to try to persuade people to believe claims about sex and race. However, there is the reasonable question of how much this actually impacts discussions of race and gender. There is, of course, the notion that the left has powerful machinery in place to silence dissent and suppress discussions of race and sex that deviate from their agenda. There is also the notion that this view is a straw man of the reality of the situation.

One reasonable concern is the distinction between views that can legitimately be regarded as warranting negative consequences (that is, a person gets what she deserves for saying such things) and views that should be seen as legitimate points of view, free of negative consequences. For example, if I say that you are an inferior being who is worthy only of being my servant and unworthy of the rights of a true human, then I should certainly expect negative consequences and would certainly deserve some of them.

Since I buy into freedom of expression, I do hold that people should be free to express views that would be regarded as sexist and racist. However, like J.S. Mill, I also hold that people are subject to the consequences of their actions. So, a person is free to tell us one more thing he knows about the Negro, but he should not expect that doing so will be free of consequences.

There is also the way in which such views are put forward. For example, if I were to offer a hypothesis about gender roles for scientific consideration and was willing to accept the evidence for or against it, then this would be rather different from just insisting that women are only fit for making babies and sandwiches. Since I believe in freedom of inquiry, I accept that even hypotheses that might be regarded as racist or sexist should be given due consideration if they are properly presented and tested according to rigorous standards. For example, some claim that women are more empathetic and even more ethical than men. While that might seem like a sexist view, it is a legitimate point of inquiry and one that can be tested and thus confirmed or disconfirmed. Likewise, while the claim that men are better suited for leadership might seem like a sexist view, it is also a legitimate point of inquiry and one that can presumably be investigated. As a final example, inquiring whether or not men are being pushed out of higher education is also a matter of legitimate inquiry—and one I have pursued.

If someone is merely spewing hate and nonsense, I am not very concerned if he gets himself into trouble. After all, actions have consequences. However, I am concerned about the possibility that scare tactics might be used to limit freedom of expression in the context of discussions about race and sex. The challenge here is sorting between cases of legitimate discussion/inquiry and mere racism or sexism.

As noted above, I have written about the possibility of sexism against men in contemporary academia—but I have never been threatened and no attempt has been made to silence me. This might well be because my work never caught the right (or wrong) eyes or it might be because my claims are made as a matter of inquiry and rationally argued. Because of my commitment to these values, I am quite willing to consider examples of cases where sensible and ethical people have attempted to engage in rational and reasonable discussion or inquiry in regards to race or sex and have been subject to attempts to silence them. I am sure there are examples and welcome their inclusion in the comments section.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Enhanced by Zemanta

Defining Our Gods

The philosopher Alvin Plantinga was interviewed for The Stone this weekend, making the claim that atheism is irrational. His conclusion, however, seems to allow that agnosticism is pretty reasonable, and his reasoning rests mostly on the absurdity of the universe and the hope that some kind of God will provide an explanation for whatever we cannot make sense of. These attitudes seem to me to require that we clarify a few things.

There are a variety of different intended meanings behind the word “atheist” as well as the word “God”. I generally make the point that I am atheistic when it comes to personal or specific gods like Zeus, Jehovah, Jesus, Odin, Allah, and so on, but agnostic if we’re talking about deism, that is, when it comes to an unnamed, unknowable, impersonal, original or universal intelligence or source of some kind. If this second force or being were to be referred to as “god” or even spoken of through more specific stories in an attempt to poetically understand some greater meaning, I would have no trouble calling myself agnostic as Plantinga suggests. But if the stories or expectations for afterlife or instructions for communications are meant to be considered as concrete as everyday reality, then I simply think they are as unlikely as Bigfoot or a faked moon landing – in other words, I am atheistic.

There are atheists who like to point out that atheism is ultimately a lack of belief, and therefore as long as you don’t have belief, you are atheistic – basically, those who have traditionally been called agnostics are just as much atheists. The purpose of this seems to be to expand the group of people who will identify more strongly as non-believers, and to avoid nuance – or what might be seen as hesitation – in self-description.

However, this allows for confusion and unnecessary disagreement at times. I think in fact that there are a fair number of people who are atheistic when it comes to very literal gods, like the one Ken Ham was espousing in his debate with Bill Nye. Some people believe, as Ken Ham does, that without a literal creation, the whole idea of God doesn’t make sense, and so believe in creationism because they believe in God. Some share this starting point, but are convinced by science and conclude there is no god. But others reject the premise and don’t connect their religious positions with their understandings of science. It’s a popular jab among atheists that “everyone is atheistic when it comes to someone else’s gods”, but it’s also a useful description of reality. We do all choose to not believe certain things, even if we would not claim absolute certainty.

Plenty of us would concede that only math or closed systems can be certain, so it’s technically possible that any conspiracy theory or mythology at issue is actually true – but still in general it can be considered reasonable not to believe conspiracy theories or mythologies. And if one includes mainstream religious mythologies with the smaller, less popular, less currently practiced ones, being atheistic about Jesus (as a literal, supernatural persona) is not that surprising from standard philosophical perspectives. The key here is that the stories are being looked at from a materialistic point of view – as Hegel pointed out, once spirituality is asked to compete in an empirical domain, it has no chance. It came about to provide insight, meaning, love and hope – not facts, proof, and evidence.

The more deeply debatable issue would be a broadly construed and non-specific deistic entity responsible for life, intelligence or being. An argument can be made that a force of this kind provides a kind of unity to existence that helps to make sense of it. It does seem rather absurd that the universe simply happened, although I am somewhat inclined to the notion that the universe is just absurd. On the other hand, perhaps there is a greater order that is not always evident. I would happily use the word agnostic to describe my opinion about this, and the philosophical discussion regarding whether there is an originating source or natural intelligence to being seems a useful one. However, it should not be considered to be relevant to one’s opinion about supernatural personas who talk to earthlings and interfere in their lives.

There are people who identify as believers who really could be categorized as atheistic in the same way I am about the literal versions of their gods. They understand the stories of their religions as pathways to a closer understanding of a great unspecified deity, but take them no more literally than Platonists take the story of the Cave, which is to say, the stories are meant to be meaningful and the concrete fact-based aspect is basically irrelevant. It’s not a question of history or science: it’s metaphysics. Let’s not pretend any of us know the answer to this one.

Picking between Studies

Illustration of swan-necked flask experiment u...

(Photo credit: Wikipedia)

In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.

Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives.  This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.

In everyday use, people often run studies and experiments together. While this is fine for informal purposes, the distinction is actually important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
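The general method just described can be sketched in a few lines of Python. This is only an illustrative sketch, not anyone’s actual research code; the `treat` and `outcome` functions are hypothetical stand-ins for exposing a subject to the causal agent and measuring the result.

```python
import random
import statistics

def run_experiment(population, treat, outcome, sample_size=100, seed=0):
    """Controlled cause-to-effect experiment, in outline: draw a random
    sample, split it at random into experimental and control groups,
    expose only the experimental group to the causal agent, then
    compare the groups' mean outcomes."""
    rng = random.Random(seed)
    sample = rng.sample(population, k=min(sample_size, len(population)))
    rng.shuffle(sample)
    half = len(sample) // 2
    experimental, control = sample[:half], sample[half:]
    exp_outcomes = [outcome(treat(subject)) for subject in experimental]
    ctl_outcomes = [outcome(subject) for subject in control]
    # A real analysis would now ask whether this difference is
    # statistically significant for the sample size used.
    return statistics.mean(exp_outcomes) - statistics.mean(ctl_outcomes)
```

The point of the random split is that, on average, the two groups are alike in every respect except exposure to the suspected cause, so any systematic difference in outcomes can be attributed to it.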

Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large, but might not be significant. For example, imagine that an experiment is conducted involving 10 people. 5 people get a diet drug (experimental group) while 5 do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small, the difference could be due entirely to chance. The following table shows some information about statistical significance.

Sample Size                     Approximate Figure the Difference Must Exceed
(Control + Experimental Group)  to Be Statistically Significant (in percentage points)

10                              40
100                             13
500                             6
1,000                           4
1,500                           3
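The figures in the table can be sanity-checked by simulation. The sketch below is an illustration under assumptions of my own (the function name and the 50/50 outcome model are not part of the original discussion): it estimates how large a difference two same-sized groups will show by pure chance, which is roughly the bar a real effect must clear.

```python
import random

def chance_difference(group_size, trials=10_000, seed=1):
    """Simulate two groups drawn from the SAME 50/50 population, so any
    difference between them is pure chance. Return the difference (in
    percentage points) exceeded only 5% of the time."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        a = sum(rng.random() < 0.5 for _ in range(group_size))  # "successes" in group A
        b = sum(rng.random() < 0.5 for _ in range(group_size))  # "successes" in group B
        diffs.append(abs(a - b) * 100 / group_size)
    diffs.sort()
    return diffs[int(0.95 * trials)]

# With five people per group (ten in all, as in the diet-drug example),
# chance alone routinely produces differences of 40+ percentage points,
# so a 30% difference shows nothing. With 250 per group, the bar drops
# to single digits.
bar_tiny = chance_difference(5)
bar_large = chance_difference(250)
```

This is why the diet-drug example above is not impressive: the observed 30% difference is well inside what chance alone produces at that sample size.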

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.

Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are simply those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining whether there is a statistically significant suspected causal factor. If such a factor is found, then it can tentatively be taken as a causal factor—one that will probably require additional study. As with the other study and the experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.

Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. To sort this out would involve a study involving chewers who are not ex-smokers.

It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.

Overall, here are some key questions to ask when picking a study:

Was the study/experiment properly conducted?

Was the sample size large enough?

Were the results statistically significant?

Were those conducting the study/experiment experts?


Picking between Experts

A logic diagram proposed for WP OR to handle a situation where two equal experts disagree. (Photo credit: Wikipedia)

One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.

If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.

The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about what they are talking about.

Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.

 

1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.

 

2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).


3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.

It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.


4. The person in question is not significantly biased.

This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person's credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because she has carefully considered it using her expertise. Rather, there would be reason to believe that the claim is being made because of the expert's bias or prejudice. A biased expert can still be making claims that are true; however, the person's bias lowers her credibility.

It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased, while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.

Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach, provided that there is adequate evidence of bias strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested; more metaphorically, to consider whether the expert has "skin in the game" and stands to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that the expert is biased. Vague accusations that the expert has "liberal" or "conservative" views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert. They are certainly more logical than simply picking the expert who says what one likes.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page


Hyperbole, Again

Protesters at the Taxpayer March on W... (Photo credit: Wikipedia)

Hyperbole is a rhetorical device in which a person uses an exaggeration or overstatement in order to create a negative or positive feeling. Hyperbole is often combined with a rhetorical analogy. For example, a person might say that someone told "the biggest lie in human history" in order to create a negative impression. It should be noted that not all vivid or extreme language is hyperbole: if the extreme language matches the reality, then it is not hyperbole. So, if the lie was actually the biggest lie in human history, then it would not be hyperbole to make that claim.

People often make use of hyperbole when making rhetorical analogies/comparisons. A rhetorical analogy involves comparing two (or more) things in order to create a negative or positive impression.  For example, a person might be said to be as timid as a mouse or as smart as Einstein. By adding in hyperbole, the comparison can be made more vivid (or possibly ridiculous). For example, a professor who assigns a homework assignment that is due the day before spring break might be compared to Hitler. Speaking of Hitler, hyperbole and rhetorical analogies are stock items in political discourse.

Some Republicans have decided that Obamacare is going to be their main battleground. As such, it is hardly surprising that they have been breaking out the hyperbole in attacking it. Dr. Ben Carson launched an attack by seeming to compare Obamacare to slavery, but the response to this led him to “clarify” his remarks to mean that he thinks Obamacare is not like slavery, but merely the worst thing to happen to the United States since slavery. This would, of course, make it worse than all the wars, the Great Depression, 9/11 and so on.

While he did not make a slavery comparison, Ted Cruz made a Nazi comparison during his filibuster. As Carson did, Cruz and his supporters did their best to “clarify” the remark.

Since slavery and Nazis had been taken, Rick Santorum decided to use the death of Mandela as an opportunity to compare Obamacare to Apartheid.

When Republicans are not going after Obamacare, Obama himself is a prime target for hyperbole. John McCain, who called out Cruz on his Nazi comparison, could not resist making use of some Nazi hyperbole in his own comparison. When Obama shook Raul Castro's hand, McCain could not resist comparing Obama to Chamberlain and Castro to Hitler.

Democrats and Independents are not complete strangers to hyperbole, but they do not seem to wield it quite as often (or as awkwardly) as Republicans. There have been exceptions, of course: the sweet allure of a Nazi comparison is bipartisan. However, my main concern here is not to fill out political scorecards regarding hyperbole. Rather, it is to discuss why such uses of negative hyperbole are problematic.

One point of note is that while hyperbole can be effective at making people feel a certain way (such as angry), its use often suggests that the user has little in the way of substance. After all, if something is truly bad, then there would seem to be no legitimate need to make exaggerated comparisons. In the case of Obamacare, if it is truly awful, then it should suffice to describe its awfulness rather than make comparisons to Nazis, slavery and Apartheid. Of course, it would also be fair to show how it is like these things. Fortunately for America, it is obviously not like them.

One point of moral concern is the fact that making such unreasonable comparisons is an insult to the people who suffered from or fought against such evils. After all, such comparisons transform such horrors as slavery and Apartheid into mere rhetorical chips in the latest political game. To use an analogy, it is somewhat like a person who has played Call of Duty comparing himself to combat veterans of actual wars. Out of respect for those who suffered from and fought against these horrors, they should not be used so lightly and for such base political gameplay.

From the standpoint of critical thinking, such hyperbole should be avoided because it has no logical weight and serves to confuse matters by playing on the emotions. While that is the intent of hyperbole, this is an ill intent. While rhetoric does have its legitimate place (mainly in making speeches less boring) such absurd overstatements impede rather than advance rational discussion and problem solving.

