
Philosophy & My Old Husky I: Post Hoc & Anecdotal Evidence

My Siberian husky, Isis, joined the pack in 2004 at the age of one. It took her a little while to realize that my house was now her house—she set out to chew all that could be chewed, presumably as part of some sort of imperative of destruction. Eventually, she came to realize that she was chewing her stuff—or so I like to say. More likely, joining me on 8-16 mile runs wore the chew out of her.

As the years went by, we both slowed down. Eventually, she could no longer run with me (despite my slower pace) and we went on slower adventures (one does not walk a husky; one goes adventuring with a husky). Despite her advanced age, she remained active—at least until recently. After an adventure, she seemed slow and sore. She cried out once in pain, but then seemed to recover. Then she got worse, requiring a trip to the emergency veterinarian (pets seem to know the regular vet’s hours and prefer to have their woes on weekends).

The good news was that the x-rays showed no serious damage—just indications of age-related wear and tear. She also had some unusual test results, perhaps indicating cancer. Because of her age, the main concern was her mobility and pain—as long as she could get about and be happy, that was what mattered. She was prescribed an assortment of medications and a follow-up appointment was scheduled with the regular vet. By then, she had gotten worse in some ways—her right foot was “knuckling” over, making walking difficult. This is often a sign of nerve issues. She was prescribed steroids and had to go through a washout period before starting the new medicine. As might be imagined, neither of us got much sleep during this time.

While all stories eventually end, her story is still ongoing—the steroids seemed to have done the trick. She can go on slow adventures and enjoys basking in the sun—watching the birds and squirrels, willing the squirrels to fall from the tree and into her mouth.

While philosophy is often derided as useless, it was actually very helpful to me during this time and I decided to write about this usefulness as both a defense of philosophy and, perhaps, as something useful for others who face similar circumstances with an aging canine.

Isis’ emergency visit was focused on pain management and one drug she was prescribed was Carprofen (more infamously known by the name Rimadyl). Carprofen is an NSAID that is supposed to be safer for canines than those designed for humans (like aspirin) and is commonly used to manage arthritis in elderly dogs. Being a curious and cautious sort, I researched all the medications (having access to professional journals and a Ph.D. is handy here). As is often the case with medications, I ran across numerous forums which included people’s sad and often angry stories about how Carprofen killed their pets. The typical story involved what one would expect: a dog was prescribed Carprofen and then died or was found to have cancer shortly thereafter. I found such stories worrisome—I did not want my dog to be killed by her medicine. But I also knew that without medication she would be in terrible pain and unable to move. I wanted to make the right choice for her and knew this would require making a rational decision.

My regular vet decided to go with the steroid option, one that also has the potential for side effects—complete with the usual horror stories on the web. Once again, it was a matter of choosing between the risks of medication and the consequences of doing without. In addition to my research into the medication, I also investigated various other options for treating arthritis and pain in older dogs. She was already on glucosamine (which might be beneficial, but seems to have no serious side effects), but the web poured forth an abundance of options ranging from acupuncture to herbal remedies. I even ran across the claim that copper bracelets could help pain in dogs.

While some of the alternatives had been subject to actual scientific investigation, the majority of the discussions involved a mix of miracle and horror stories. One person might write glowingly about how an herbal product brought his dog back from death’s door while another might claim that after he gave his dog the product, the dog died because of it. Sorting through all these claims, anecdotes and studies turned out to be a fair amount of work. Fortunately, I had numerous philosophical tools that helped a great deal with such cases, specifically of the sort where it is claimed that “I gave my dog X, then he got better/died and X was the cause.” Knowing about two common fallacies is very useful in these cases.

The first is what is known as Post Hoc Ergo Propter Hoc (“after this, therefore because of this”).  This fallacy has the following form:

  1. A occurs before B.
  2. Therefore A is the cause of B.

This fallacy is committed when it is concluded that one event causes another simply because the proposed cause occurred before the proposed effect. More formally, the fallacy involves concluding that A causes or caused B because A occurs before B and there is not sufficient evidence to actually warrant such a claim.

While cause does precede effect (at least in the normal flow of time), proper causal reasoning, as will be discussed in an upcoming essay, involves sorting out whether A occurring before B is just a matter of coincidence or not. In the case of medication involving an old dog, it could entirely be a matter of coincidence that the dog died or was diagnosed with cancer after the medicine was administered. That is, the dog might have died anyway or might have already had cancer. Without a proper investigation, simply assuming that the medication was the cause would be an error. The same holds true for beneficial effects. For example, a dog might go lame after a walk and then recover after being given an herbal supplement for several days. While it would be tempting to attribute the recovery to the herbs, they might have had no effect at all. After all, lameness often goes away on its own or some other factor might have been the cause.

This is not to say that such stories should be rejected out of hand—it is to say that they should be approached with due consideration that the reasoning involved is post hoc. In concrete terms, if you are afraid to give your dog medicine she was prescribed because you heard of cases in which a dog had the medicine and then died, you should investigate more (such as talking to your vet) about whether there really is a risk of death. As another example, if someone praises an herbal supplement because her dog perked up after taking it, then you should see if there is evidence for this claim beyond the post hoc situation.

Fortunately, there has been considerable research into medications and treatments that provide a basis for making a rational choice. When considering such data, it is important not to be lured into rejecting data by the seductive power of the Fallacy of Anecdotal Evidence.

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation on hasty generalization.  It has the following forms:

Form One

  1. Anecdote A is told about a member (or small number of members) of Population P.
  2. Conclusion C is drawn about Population P based on Anecdote A.

For example, a person might hear anecdotes about dogs that died after taking a prescribed medication and infer that the medicine is likely to kill dogs.

Form Two

  1. Reasonable statistical evidence S exists for general claim C.
  2. Anecdote A is presented that is an exception to or goes against general claim C.
  3. Conclusion: General claim C is rejected.

For example, the statistical evidence shows that the claim that glucosamine-chondroitin can treat arthritis is, at best, very weakly supported. But a person might tell a story about how their aging husky “was like a new dog” after she started getting a daily dose of the supplement. To accept this as proof that the data is wrong would be to fall for this fallacy. That said, I do give my dog glucosamine-chondroitin because it is cheap, has no serious side effects and might have some benefit. I am fully aware of the data and do not reject it—I am gambling that it might do my husky some good.

The way to avoid becoming a victim of anecdotal evidence is to seek reliable, objective statistical data about the matter in question (a vet should be a good source). This can, I hasten to add, be quite a challenge when it comes to treatments for pets. In many cases, there are no adequate studies or trials that provide statistical data and all the information available is in the form of anecdotes. One option is, of course, to investigate the anecdotes and try to do your own statistics. So, if the majority of anecdotes indicate something harmful (or something beneficial), then this would be weak evidence for the claim. In any case, it is wise to approach anecdotes with due care—a story is not proof.
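The do-it-yourself tallying of anecdotes mentioned above can be sketched in a few lines. This is purely illustrative: the outcome labels and counts are invented, and, as noted, such a tally yields at best weak evidence.

```python
from collections import Counter

# Hypothetical outcomes harvested from forum anecdotes about a supplement.
# A crude tally like this is weak evidence at best; a story is not proof.
anecdotes = ["improved", "no change", "improved", "worsened",
             "improved", "no change", "improved"]

tally = Counter(anecdotes)
total = len(anecdotes)
for outcome, count in tally.most_common():
    print(f"{outcome}: {count}/{total} ({count / total:.0%})")
```

Even a lopsided tally only suggests a hypothesis worth checking with a vet or in the literature; it cannot substitute for a controlled study.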

Threat Assessment I: A Vivid Spotlight

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is, very broadly speaking, the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both of these factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is a very low severity threat. There is an exceedingly low probability that there will be a mass shooting, but it is a high severity threat since it can result in injury or death.
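The two factors can be combined into a rough expected-harm comparison. Here is a minimal sketch; the probabilities and severity scores are invented for illustration, not real data.

```python
# Sketch of rational threat assessment: weigh probability against severity.
# All probabilities and severity scores below are invented for illustration.

def expected_harm(probability: float, severity: float) -> float:
    """Expected harm = chance the event occurs times how bad it would be."""
    return probability * severity

# (annual probability, severity on a 0-100 scale) -- hypothetical values
threats = {
    "catching a cold at the theater": (0.30, 2),        # likely but mild
    "mass shooting at the theater":   (0.0000003, 95),  # awful but rare
}

for name, (p, s) in threats.items():
    print(f"{name}: expected harm = {expected_harm(p, s):.7f}")
```

On numbers like these, the mundane high-probability, low-severity threat dominates the dramatic low-probability one, which is exactly the comparison our intuitions tend to get backwards.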

While humans have done a fairly good job at surviving, this seems to have been despite our amazingly bad skills at rational threat assessment. To be specific, the worry people feel about a threat generally does not match the actual probability of the threat occurring. People do seem somewhat better at assessing severity, though they are often in error about this as well.

One excellent example of poor threat assessment is the fear Americans have of domestic terrorism. As of December 15, 2015, 45 people had been killed in the United States in attacks classified as “violent jihadist attacks” and 48 people in attacks classified as “far right wing attacks” since 9/11/2001. In contrast, there were 301,797 gun deaths from 2005-2015 in the United States, and over 30,000 people are killed each year in motor vehicle crashes in the United States.
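A back-of-envelope comparison using the figures cited above makes the disparity concrete. The span of years is my approximation, so the ratio is rough; the point is only its order of magnitude.

```python
# Rough per-year comparison of the death tolls cited above.
terror_deaths = 45 + 48            # jihadist + far-right attacks, 9/11/2001 to 12/15/2015
terror_span_years = 14.3           # approximate span of that period
gun_deaths_2005_2015 = 301_797     # an 11-year span
vehicle_deaths_per_year = 30_000

terror_per_year = terror_deaths / terror_span_years
gun_per_year = gun_deaths_2005_2015 / 11

print(f"terrorism deaths per year: {terror_per_year:.1f}")
print(f"gun deaths per year:       {gun_per_year:.0f}")
print(f"vehicle deaths per year:   {vehicle_deaths_per_year}")
print(f"gun-to-terrorism ratio:    about {gun_per_year / terror_per_year:.0f} to 1")
```

However the span is estimated, guns and motor vehicles each kill Americans at thousands of times the rate of domestic terrorism.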

Despite the incredibly low likelihood of a person being killed by an act of terrorism in the United States, many people are terrified by terrorism (which is, of course, the goal of terrorism) and have become rather focused on the matter since the murders in San Bernardino. Although there have been no acts of terrorism on the part of refugees in the United States, many people are terrified of refugees. This has led to calls for refusing to accept Syrian refugees, and Donald Trump has famously called for a ban on all Muslims entering the United States.

Given that an American is vastly more likely to be killed while driving than killed by a terrorist, it might be wondered why people are so incredibly bad at this sort of threat assessment. The answer, at least in regards to having fear vastly out of proportion to the probability, is easy enough—it involves a cognitive bias and some classic fallacies.

People follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use proper statistical methods, people generally fall victim to the bias known as the availability heuristic. The idea is that a person unconsciously assigns a probability to something based on how often they think of that sort of event. While an event that occurs often will tend to be thought of often, the fact that something is often thought of does not make it more likely to occur.

After an incident of domestic terrorism, people think about terrorism far more often and thus tend to unconsciously believe that the chance of terrorism occurring is far higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is incredibly low (driving to the beach is vastly more likely to kill you than a shark is). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, very difficult: people tend to regard their feelings, however unwarranted, as the best evidence—despite their usually being the worst evidence.

People are also misled about probability by various fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain class or type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention is focused on that fact, leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most Christians are terrorists because the media covered a terrorist who was Christian (who shot up a Planned Parenthood). If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a domestic terrorist attack by Muslims.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. This fallacy is similar to hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.

People often fall victim to this fallacy because stories and anecdotes tend to have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence in favor of said anecdote. Not surprisingly, people most commonly fall for this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of domestic terrorism or tell the story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring. For example, people point to the claim that one of the terrorists in Paris masqueraded as a refugee and infer that refugees pose a great threat to the United States. Or they tell the story about the one attacker in San Bernardino who arrived in the states on a K-1 (“fiancé”) visa and make unwarranted conclusions about the danger of the visa system (which is used by about 25,000 people a year).

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is particularly vivid or dramatic does not make the event more likely to occur, especially in the face of significant statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases tend to make a very strong impression on the human mind. For example, mass shootings by domestic terrorists are vivid and awful, so it is hardly surprising that people feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more a person feels that it will occur.

It should be kept in mind that taking into account the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide to never go sky diving because the effects of an accident can be very, very dramatic. If he knows that, statistically, the chances of the accident happening are very low but considers even a small risk unacceptable, then he would not be making this error in reasoning. This then becomes a matter of value judgment—how much risk is a person willing to tolerate relative to the severity of the potential harm?

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats is rather important for both practical and moral reasons. The matter of terrorism is no exception to this.  Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter a miniscule threat while spending little on leading causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating fear that is unfounded. In addition to the psychological harm to individuals, there is also the damage to the social fabric. There has already been an increase in attacks on Muslims in America and people are seriously considering abandoning core American values, such as the freedom of religion and being good Samaritans.

In light of the above, I urge people to think rather than feel their way through their concerns about terrorism. Also, I urge people to stop listening to Donald Trump. He has the right of free expression, but people also have the right of free listening.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

The “Two Bads” Fallacy & Racism

The murder of nine people in the Emanuel AME Church in South Carolina ignited an intense discussion of race and violence. While there has been near-universal condemnation of the murders, some people have taken pains to argue that these killings are part of a broader problem of racism in America. This claim is supported by reference to the well-known history of systematic violence against blacks in America as well as consideration of data from today. Interestingly, some people respond to this approach by asserting that more blacks are killed by blacks than by whites. Some even seem obligated to add the extra fact that more whites are killed by blacks than blacks are killed by whites.

While these points are often just “thrown out there” without being forged into part of a coherent argument, presumably the intent of such claims is to somehow disprove or at least diminish the significance of claims regarding violence against blacks by whites. To be fair, there might be other reasons for bringing up such claims—perhaps the person is engaged in an effort to broaden the discussion to all violence out of a genuine concern for the well-being of all people.

In cases in which the claims about the number of blacks killed by blacks are brought forth in response to incidents such as the church shooting, this tactic appears to be a specific form of a red herring. This is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic.

This sort of “reasoning” has the following form:

  1. Topic A is under discussion.
  2. Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
  3. Topic A is abandoned.

In the case of the church shooting, the pattern would be as follows:

  1. The topic of racist violence against blacks is being discussed, specifically the church shooting.
  2. The topic of blacks killing other blacks is brought up.
  3. The topic of racist violence against blacks is abandoned in favor of focusing on blacks killing other blacks.


This sort of “reasoning” is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim. In the specific case at hand, switching the topic to black on black violence does nothing to address the topic of racist violence against blacks.

While the red herring label would certainly suffice for these cases, it is certainly appealing to craft a more specific sort of fallacy for cases in which something bad is “countered” by bringing up another bad. The obvious name for this fallacy is the “two bads fallacy.” This is a fallacy in which a second bad thing is presented in response to a bad thing with the intent of distracting attention from the first bad thing (or with the intent of diminishing the badness of the first bad thing).

This fallacy has the following pattern:

  1. Bad thing A is under discussion.
  2. Bad thing B is introduced under the guise of being relevant to A (when B is actually not relevant to A in this context).
  3. Bad thing A is ignored, or the badness of A is regarded as diminished or refuted.

In the case of the church shooting, the pattern would be as follows:

  1. The murder of nine people in the AME church, which is bad, is being discussed.
  2. Blacks killing other blacks, which is bad, is brought up.
  3. The badness of the murder of the nine people is abandoned, or its badness is regarded as diminished or refuted.

This sort of “reasoning” is fallacious because the mere fact that something else is bad does not entail that another bad thing thus has its badness lessened or refuted. After all, the fact that there are worse things than something does not entail that it is not bad. In cases in which there is not an emotional or ideological factor, the poorness of this reasoning is usually evident:

Sam: “I broke my arm, which is bad.”
Bill: “Well, some people have two broken arms and two broken legs.”
Joe: “Yeah, so much for your broken arm being bad. You are just fine. Get back to work.”

What seems to lend this sort of “reasoning” some legitimacy is that comparing two things that are bad is relevant to determining relative badness. If a person is arguing about how bad something is, it is certainly reasonable to consider it in the context of other bad things. For example, the following would not be fallacious reasoning:

Sam: “I broke my arm, which is bad.”
Bill: “Some people have two broken arms and two broken legs.”
Joe: “That is worse than one broken arm.”
Sam: “Indeed it is.”
Joe: “But having a broken arm must still suck.”
Sam: “Indeed it does.”

Because of this, it is important to distinguish between cases of the fallacy (X is bad, but Y is also bad, so X is not bad) and cases in which a legitimate comparison is being made (X is bad, but Y is worse, so X is less bad than Y, but still bad).

Factions & Fallacies


In general, human beings readily commit to factions and then engage in very predictable behavior: they regard their own factions as right, good and truthful while casting opposing factions as wrong, evil and deceitful. While the best known factions tend to be political or religious, people can form factions around almost anything, ranging from sports teams to video game consoles.

While there can be rational reasons to form and support a faction, factionalism tends to be fed and watered by cognitive biases and fallacies. The core cognitive bias of factionalism is what is commonly known as in-group bias. This is the psychological tendency to easily form negative views of those outside the faction. For example, Democrats often regard Republicans in negative terms, casting them as uncaring, sexist, racist and fixated on money. In turn, Republicans typically look at Democrats in negative terms and regard them as fixated on abortion, obsessed with race, eager to take from the rich, and desiring to punish success. This obviously occurs outside of politics as well, with competing religious groups regarding each other as heretics or infidels. It even extends to games and sports, with the battle of #gamergate serving as a nice illustration.

The flip side of this bias is that members of a faction regard their fellows and themselves in a positive light and are thus inclined to attribute to themselves positive qualities. For example, Democrats see themselves as caring about the environment and being concerned about social good. As another example, Tea Party folks cast themselves as true Americans who get what the founding fathers really meant.

This bias is often expressed in terms of and fuelled by stereotypes. For example, critics of the sexist aspects of gaming will make use of the worst stereotypes of male gamers (dateless, pale misogynists who spew their rage around a mouthful of Cheetos). As another example, Democrats will sometimes cast the rich as uncaring and out-of-touch plutocrats. These stereotypes are sometimes taken to the extreme of demonizing: presenting the other faction’s members as not merely wrong or bad but evil in the extreme.

Such stereotypes are easy to accept and many are based on another bias, known as the fundamental attribution error. This is a psychological tendency to fail to realize that the behavior of other people is as much limited by circumstances as our behavior would be if we were in their shoes. For example, a person who was born into a well-off family and enjoyed many advantages in life might fail to realize the challenges faced by people who were not so lucky in their birth. Because of this, she might demonize those who are unsuccessful and attribute their failure to pure laziness.

Factionalism is also strengthened by various common fallacies. The most obvious of these is the appeal to group identity. This fallacy occurs when a person accepts her pride in being in a group as evidence that a claim is true. Roughly put, a person believes it because her faction accepts it as true. The claim might actually be true; the mistake is that the basis of the belief is not rational. For example, a devoted environmentalist might believe in climate change because of her membership in that faction rather than on the basis of evidence (which actually does show that climate change is occurring). This method of belief “protects” group members from evidence and arguments because such beliefs are based on group identity rather than evidence and arguments. While a person can overcome this fallacy, faction-based beliefs tend to only change when the faction changes or if the person leaves the faction.

The above-mentioned biases also tend to lean people towards fallacious reasoning. The negative biases tend to motivate people to accept straw man reasoning, which is when a person simply ignores another person’s actual position and substitutes a distorted, exaggerated or misrepresented version of that position. Politicians routinely make straw men out of the views they oppose and their faction members typically embrace these. The negative biases also make ad hominem fallacies common. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). For example, opponents of a feminist critic of gaming might reject her claims by claiming that she is only engaged in the criticism so as to become famous and make money. While it might be true that she is doing just that, this does not disprove her claims. The guilt by association fallacy, in which a person rejects a claim simply because it is pointed out that people she dislikes accept the claim, both arises from and contributes to factionalism.

The negative views and stereotypes are also often fed by fallacies that involve poor generalizations. One is misleading vividness, a fallacy in which a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. For example, a person in a faction holding that gamers are violent misogynists might point to the recent death threats against a famous critic of sexism in games as evidence that most gamers are violent misogynists. Misleading vividness is, of course, closely related to hasty generalization, a fallacy in which a person draws a conclusion about a population based on a sample that is not large enough to justify that conclusion. For example, a Democrat might believe that all corporations are bad based on the behavior of BP and Wal-Mart. Biased generalizations also occur, which is a fallacy that is committed when a person draws a conclusion about a population based on a sample that is biased or prejudiced in some manner. This tends to be fed by the confirmation bias—the tendency people have to seek and accept evidence for their view while avoiding or ignoring evidence against their view. For example, a person might support his view that the poor want free stuff for nothing with visits to web sites featuring YouTube videos selected to show poor people expressing that view.

The positive biases also contribute to fallacious reasoning, often taking the form of a positive ad hominem. A positive ad hominem occurs when a claim is accepted on the basis of some irrelevant fact about the author or person presenting the claim or argument. Typically, this fallacy involves two steps. First, something positive (but irrelevant) about the character of the person making the claim, her circumstances, or her actions is presented. Second, this is taken to be evidence for the claim in question. For example, a Democrat might accept what Bill Clinton says as being true, just because he really likes Bill.

Not surprisingly, factionalism is also supported by faction variations on the appeal to belief (it is true/right because my faction believes it is so), appeal to common practice (it is right because my faction does it), and appeal to tradition (it is right because my faction has “always done this”).

Factionalism is both fed by and contributes to such biases and poor reasoning. This is not to say that group membership is a bad thing, just that it is wise to be on guard against the corrupting influence of factionalism.


Ad Baculum, Racism & Sexism


I was asked to write a post about the ad baculum in the context of sexism and racism. To start things off, an ad baculum is a common fallacy that, like most common fallacies, goes by a variety of names. This particular fallacy is also known as appeal to fear, appeal to force and scare tactics. The basic idea is quite straightforward and the fallacy has a simple form:

Premise: Y is presented (a claim that is intended to produce fear).

Conclusion: Therefore, claim X is true (a claim that is generally, but need not be, related to Y in some manner).


This line of “reasoning” is fallacious because creating fear in people (or threatening them) does not constitute evidence that a claim is true. This tactic can be rather effective as a persuasive device since fear can be an effective motivator for belief. But, there is a distinction between a logical reason to accept a claim as true and a motivating reason to believe that a claim is true.

Like all fallacies, ad baculums will serve any master, so they can be employed as a device in “support” of any claim. In the days when racism and sexism were rather more overt in America, ad baculums were commonly employed in the hopes of motivating people to accept (or at least not oppose) racism and sexism. Naturally, the less subtle means of direct threats and physical violence (up to and including murder) were deployed as well.

In the United States of 2014, overt racism and sexism are regarded as unacceptable and those who make racist or sexist claims sometimes find themselves the object of public disapproval. In some cases, making such claims can cost a person his job.

In some cases, it will be claimed that the claims were not actually racist or sexist. In other cases, the racism or sexism will not be denied, but an appeal will be made to freedom of expression and concerns will be raised that a person is being denied his rights when he is subject to a backlash for remarks that some might regard as racist or sexist.

Given that people are sometimes subject to negative consequences for making claims that are seen by some as racist or sexist, it is not unreasonable to consider that ad baculums are sometimes deployed to limit free expression. That is, that the threat of some sort of retaliation is used to persuade people to accept certain claims. Or, at the very least, used in an attempt to silence people.

It is rather important to be clear about an important distinction between an appeal to fear (using fear to get people to believe) and there being negative consequences for a person’s actions. For example, if someone says “you know, young professor, that we carefully consider a person’s view on race and sex before granting tenure…so I certainly hope that you are with us in your beliefs and actions”, then that is an appeal to fear: the young professor is supposed to agree with her colleagues and believe that claims are true because she has been threatened. But, if a young professor realizes that she will be fired for yelling things like “go back to England, white devil honkey crackers male-pigs” at her white male students and elects not to do so, she is not a victim of an appeal to fear. To use another example, if I refrain from shouting obscenities at the Dean because I would rather not be fired, I am not a victim of an ad baculum. As a final example, if I decide not to say horrible things about my friends because I know that they would reconsider their relationship to me, then I am not a victim of an ad baculum. As such, the mark of an ad baculum is not that a person faces potential negative consequences for saying things; it is that a person is supposed to accept a claim as true on the basis of “evidence” that is merely a threat or something intended to create fear. Thus, the fact that making claims that could be taken as sexist or racist could result in negative consequences does not entail that anyone is a victim of an ad baculum in this context.

What some people seem to be worried about is the possibility of a culture of coercion (typically regarded as leftist) that aims at making people conform to a specific view about sex and race. If there were such a culture or system of coercion that aimed at making people accept claims about race and gender using threats as “evidence”, then there would certainly be ad baculums being deployed.

I certainly will not deny that there are some people who do use ad baculums to try to persuade people to believe claims about sex and race. However, there is the reasonable question of how much this actually impacts discussions of race and gender. There is, of course, the notion that the left has powerful machinery in place to silence dissent and suppress discussions of race and sex that deviate from their agenda. There is also the notion that this view is a straw man of the reality of the situation.

One point of reasonable concern is considering the distinction between views that can be legitimately regarded as warranting negative consequences (that is, a person gets what she deserves for saying such things) and views that should be seen as legitimate points of view, free of negative consequences. For example, if I say that you are an inferior being who is worthy only of being my servant and unworthy of the rights of a true human, then I should certainly expect negative consequences and would certainly deserve some of them.

Since I buy into freedom of expression, I do hold that people should be free to express views that would be regarded as sexist and racist. However, like J.S. Mill, I also hold that people are subject to the consequences of their actions. So, a person is free to tell us one more thing he knows about the Negro, but he should not expect that doing so will be free of consequences.

There is also the way in which such views are considered. For example, if I were to put forth a hypothesis about gender roles for scientific consideration and was willing to accept the evidence for or against my hypothesis, then this would be rather different than just insisting that women are only fit for making babies and sandwiches. Since I believe in freedom of inquiry, I accept that even hypotheses that might be regarded as racist or sexist should be given due consideration if they are properly presented and tested according to rigorous standards. For example, some claim that women are more empathetic and even more ethical than men. While that might seem like a sexist view, it is a legitimate point of inquiry and one that can be tested and thus confirmed or disconfirmed. Likewise, while the claim that men are better suited for leadership might seem like a sexist view, it is also a legitimate point of inquiry and one that can presumably be investigated. As a final example, inquiring whether or not men are being pushed out of higher education is also a matter of legitimate inquiry—and one I have pursued.

If someone is merely spewing hate and nonsense, I am not very concerned if he gets himself into trouble. After all, actions have consequences. However, I am concerned about the possibility that scare tactics might be used to limit freedom of expression in the context of discussions about race and sex. The challenge here is sorting between cases of legitimate discussion/inquiry and mere racism or sexism.

As noted above, I have written about the possibility of sexism against men in contemporary academia—but I have never been threatened and no attempt has been made to silence me. This might well be because my work never caught the right (or wrong) eyes, or it might be because my claims are made as a matter of inquiry and rationally argued. Because of my commitment to these values, I am quite willing to consider examples of cases where sensible and ethical people have attempted to engage in rational and reasonable discussion or inquiry in regard to race or sex and have been subject to attempts to silence them. I am sure there are examples and welcome their inclusion in the comments section.




The Incest Argument & Same-Sex Marriage

Marriage March 2013

(Photo credit: American Life League)

One of the stock fallacious arguments against same-sex marriage is the slippery slope argument, in which it is contended that allowing same-sex marriage will lead to allowing incestuous marriage. The mistake being made is, of course, that the link between the two is never actually established. Since the slippery slope is a fallacy, this is obviously a bad argument.

A non-fallacious argument that is also presented against same-sex marriage involves the contention that allowing same-sex marriage on the basis of a certain principle would require that, on pain of inconsistency, we also accept incestuous marriage. This principle is typically some variant of the principle that a person should be able to marry any other person. Given that incestuous marriage is bad, this would seem to entail that we should not allow same-sex marriage.

My first standard reply to this argument is that if different-sex marriage does not require us to accept incestuous marriage, then neither does accepting same-sex marriage. But, if accepting same-sex marriage entails that we have to accept incestuous marriage, the same would also apply to different-sex marriage. That this is so is shown by the following argument. If same-sex marriage is based on the principle that a person should be allowed to marry the person they wish to marry, then it would seem that different-sex marriage is based on the principle that a person should be allowed to marry the person of the opposite sex they wish to marry. By analogy, if allowing a person to marry any person they want to marry allows incestuous marriage, then allowing a person to marry a member of the opposite sex would also allow incestuous marriage, albeit only to a member of the opposite sex. But, if the slide to incest can be stopped in the case of different-sex marriage, then the same stopping mechanism can be used in the case of same-sex marriage.

In the case of different-sex marriage, there is generally an injunction against people marrying close relatives. This same injunction would certainly seem to be applicable in the case of same-sex marriage. After all, there is nothing about accepting same-sex marriage that inherently requires accepting incestuous marriage.

One possible objection to my reply is that incestuous different-sex marriage is forbidden on the grounds that such relationships could produce children. More specifically, incestuous reproduction tends to be more likely to produce genetic defects, which provides a basis for a utilitarian moral argument against allowing incestuous marriage. Obviously, same-sex marriages have no possibility of producing children naturally. This would be a relevant difference between same-sex marriage and different-sex marriage. Thus, it could be claimed that while different-sex marriage can be defended from incestuous marriage on these grounds, the same cannot be said for same-sex marriage. Once same-sex marriage is allowed, it would be unprincipled to deny same-sex incestuous marriage.

There are four obvious replies here.

First, if the only moral problem with incestuous marriage is the higher  possibility of producing children with genetic defects, then incestuous same-sex marriage would not be morally problematic. Ironically, the relevant difference between the two that prevents denying same-sex-incestuous marriage would also make it morally acceptable.

Second, if a different-sex incestuous couple could not reproduce (due to natural or artificial sterility), then this principle would allow them to get married. After all, they are no more capable of producing children than a same-sex couple.

Third, if it could be shown that a different-sex incestuous couple would have the same chance of having healthy children as a non-incestuous couple, then this would allow them to get married. After all, they are no more likely to produce children with genetic defects than a non-incestuous couple.

Fourth, given that the principle is based on genetic defects being more likely than normal, it would follow that unrelated couples who are likely to produce offspring with genetic defects should not be allowed to be married. After all, the principle is that couples who are likely to produce genetically defective offspring cannot be married. Thanks to advances in genetics, it is (or soon will be) possible (and affordable) to check the “genetic odds” for couples. As such, if incestuous marriage is wrong because of the higher possibility (whatever the level of unacceptable risk might be) of genetic defects, then the union of unrelated people who have a higher possibility of genetically defective children would also be wrong. This would seem to entail that if incestuous marriage should be illegal on these grounds, then so too should the union of unrelated people who have a similar chance of producing defective children.
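As an aside on checking the “genetic odds”: the elevated risk at issue can be made precise with the inbreeding coefficient from population genetics—the probability that the two alleles an offspring inherits at a locus are identical by descent. Here is a minimal sketch of the standard path-counting formula (the path lengths below are the textbook values for first cousins, not data about any particular couple):

```python
def inbreeding_coefficient(paths):
    """Standard path-counting formula: F is the sum, over the couple's
    common ancestors, of (1/2)^(n1 + n2 + 1), where n1 and n2 are the
    generations separating each partner from that ancestor."""
    return sum(0.5 ** (n1 + n2 + 1) for n1, n2 in paths)

# First cousins share two grandparents, each two generations away:
print(inbreeding_coefficient([(2, 2), (2, 2)]))  # 0.0625, i.e. 1/16

# An unrelated couple has no shared ancestors in view, so F is zero:
print(inbreeding_coefficient([]))  # 0
```

The argument in the text then turns on whether a given level of risk crosses whatever threshold is deemed unacceptable—however the couple came by that risk.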

In light of the above, the incest gambit against same-sex marriage would seem to fail. However, it also seems to follow that incestuous marriage would be acceptable in some cases.

Obviously enough, I have an emotional opposition to incest and believe that it should not be allowed. Of course, how I feel about it is no indication of its correctness or incorrectness. I do, of course, have arguments against incest.

Many cases of incest involve a lack of consent, coercion or actual rape. Such cases often involve an older relative having sexual relations with a child. This sort of incest is clearly wrong and arguments for this are easy enough to provide; after all, one can make use of the usual arguments against coercion, child molestation and rape.

Where matters get rather more difficult is incest involving two consenting adults, be they of the same or different sexes. After all, the moral arguments that are based on a lack of consent no longer apply. Appealing to tradition will not work here; after all, that is a fallacy. The claim that it makes me uncomfortable or even sick would also not have any logical weight. As J.S. Mill argued, I have no right to prevent people from engaging in consensual activity just because I think it is offensive. What would be needed would be evidence of harm being done to others without their consent.

I have considered the idea that allowing incestuous marriage would be damaging to family relations. That is, the proper moral relations between relatives are such that incest would be harmful to the family as a whole. This is, obviously enough, analogous to the arguments made by those who oppose same-sex marriage. They argue that allowing same-sex marriage would be damaging to family relations because the proper moral relation between a married couple is such that same-sex marriage would damage the family as a whole. As it stands, the evidence is that same-sex couples do not create such harm. Naturally, there is not much evidence involving incestuous marriages or relationships. However, if it could be shown that incestuous relationships between consenting adults were harmful, then they could thus be justly forbidden on utilitarian grounds. Naturally, the same would hold true of same-sex relationships.

Reflecting on incestuous marriage has, interestingly enough, given me some sympathy for people who have reflected on same-sex marriage and believe that there is something wrong about it. After all, I am against incestuous marriage and thinking of it makes me feel ill. However, I am at a loss for a truly compelling moral argument against it that would not also apply to non-related couples. My best argument, as I see it, is the harm argument. This is, as noted above, analogous to the harm argument used by opponents of same-sex marriage. The main difference is, of course, that the harm arguments presented by opponents of same-sex marriage have been shown to have premises that are not true. For example, claims about the alleged harms to children from having same-sex parents have been shown to be untrue. As such, I am not against same-sex marriage, but I am opposed to incestuous marriage, be it between people of the same or different sexes.

For Better or Worse Reasoning





76 Fallacies in Print


76 Fallacies is now available in print from Amazon and other fine sellers of books.

In addition to combining the content of my 42 Fallacies and 30 More Fallacies, this book features some revisions as well as a new section on common formal fallacies.

As the title indicates, this book presents seventy six fallacies. The focus is on providing the reader with definitions and examples of these common fallacies rather than being a handbook on winning arguments or general logic.

The book presents the following 73 informal fallacies:

Accent, Fallacy of
Accident, Fallacy of
Ad Hominem
Ad Hominem Tu Quoque
Amphiboly, Fallacy of
Anecdotal Evidence, Fallacy Of
Appeal to the Consequences of a Belief
Appeal to Authority, Fallacious
Appeal to Belief
Appeal to Common Practice
Appeal to Emotion
Appeal to Envy
Appeal to Fear
Appeal to Flattery
Appeal to Group Identity
Appeal to Guilt
Appeal to Novelty
Appeal to Pity
Appeal to Popularity
Appeal to Ridicule
Appeal to Spite
Appeal to Tradition
Appeal to Silence
Appeal to Vanity
Argumentum ad Hitlerum
Begging the Question
Biased Generalization
Burden of Proof
Complex Question
Composition, Fallacy of
Confusing Cause and Effect
Confusing Explanations and Excuses
Circumstantial Ad Hominem
Cum Hoc, Ergo Propter Hoc
Division, Fallacy of
Equivocation, Fallacy of
Fallacious Example
Fallacy Fallacy
False Dilemma
Gambler’s Fallacy
Genetic Fallacy
Guilt by Association
Hasty Generalization
Historian’s Fallacy
Illicit Conversion
Ignoring a Common Cause
Incomplete Evidence
Middle Ground
Misleading Vividness
Moving the Goal Posts
Oversimplified Cause
Overconfident Inference from Unknown Statistics
Pathetic Fallacy
Peer Pressure
Personal Attack
Poisoning the Well
Positive Ad Hominem
Post Hoc
Proving X, Concluding Y
Psychologist’s Fallacy
Questionable Cause
Red Herring
Reification, Fallacy of
Relativist Fallacy
Slippery Slope
Special Pleading
Straw Man
Texas Sharpshooter Fallacy
Two Wrongs Make a Right
Victim Fallacy
Weak Analogy

The book contains the following three formal (deductive) fallacies:

Affirming the Consequent
Denying the Antecedent
Undistributed Middle


A, B, C, D – a fallacy

I don’t know whether this fallacy has a name of its own – I’m sure that Mike LaBossiere can tell us if it does – but how often have you seen somebody argue along the following lines?

P1. X believes A, B, and C.
P2. Y and Z (and others) believe A, B, C, and D.
C. (Therefore) X believes D.

What then happens is that X is criticised for believing D, even though D may be a proposition that X has never argued for, expressly relied upon, or even affirmed. In some cases, D may be some horrible proposition that would suggest X is of bad character if X actually believes it. In other cases, it may merely be something absurd, clearly false, or highly controversial.

As it stands, the argument that X believes D is straightforwardly invalid. It is no more valid if it takes the following variant form:

P1. X believes A, B, and C.
P2. Y and Z (and others) believe A, B, and C because they believe D.
C1. (Therefore) X believes A, B, and C because X believes D.
C. (Therefore) X believes D.

With the two premises reversed, arguments like this are similar to the classic (and straightforwardly fallacious):

P1. Some Xs are A’s.
P2. X-1 is an X.
C. X-1 is an A.
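One way to see the invalidity concretely is to build a counterexample model in which both premises are true and the conclusion false. A quick sketch in Python (the propositions and believers are invented, of course), modelling beliefs as sets of propositions:

```python
# Model beliefs as sets of propositions.
beliefs = {
    "X": {"A", "B", "C"},
    "Y": {"A", "B", "C", "D"},
    "Z": {"A", "B", "C", "D"},
}

# Both premises hold in this model...
assert {"A", "B", "C"} <= beliefs["X"]                               # P1
assert all({"A", "B", "C", "D"} <= beliefs[p] for p in ("Y", "Z"))   # P2

# ...and yet the conclusion is false: X does not believe D.
# A model where the premises are true and the conclusion false
# is exactly what it means for the form to be invalid.
assert "D" not in beliefs["X"]
print("counterexample stands: the form is invalid")
```

Nothing in the premises rules this model out, which is the whole problem with the inference.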

Perhaps, however, there is something more going on in the minds of people who use arguments such as I’ve identified.

Perhaps, on a particular occasion, they think that A,B,C without D is somehow an incoherent package of beliefs, and so they attribute to X what they see as the more coherent A,B,C,D.

Or perhaps they are reasoning inductively from a sociological observation that most people who believe A,B,C also believe D, so X probably believes D. Or maybe, related to the previous paragraph, they think that you could only, rationally, come to believe A,B,C on the basis of first believing D. Or the idea might be that believing D, which is widespread, causes a widespread bias in favour of people believing A,B,C (though D is highly controversial, or clearly false, or some such thing, once it’s explicitly identified).

Although it’s always open to someone to put these sorts of arguments, they are obviously going to be tricky in any particular case. Reasons have to be given as to why D produces a bias, why D might be widely (perhaps subconsciously?) believed even though it is clearly false, or absurd, or whatever, once identified; why the position A,B,C, without D, is incoherent; why there is no other basis for thinking A,B,C; and/or whatever else might be required to make out the particular argument. You need to be careful before you move too quickly to saddle somebody with the absurd or clearly false or highly controversial or just plain horrible proposition D.

That said, the temptation to move quickly and incautiously down this path seems to be a strong one. Often we have enough background beliefs of our own (“Surely no one could possibly believe A,B,C unless they first believe D!”) that we find it very natural to draw the final inference intuitively and almost unconsciously. I know that I sometimes feel this temptation, and I’m sure I’ve often succumbed to it. I don’t think there’s a lot of point in castigating people for it, or even in apologising when caught doing it.

Hasty reasoning of this kind, leaving out steps, and failing to recognise just how difficult and inconclusive such arguments tend to be, is all too tempting. It’s lazy. It cuts corners. It can lead to you paying insufficient attention to what an opponent is really saying. In the extreme, it might encourage you to demonise an opponent (X surely “must” believe the horrible proposition D!) without a good basis. But it is not the sort of thing done only by irrational or ill-willed people.

My proposal is not so much that we go around castigating this way of thinking, which is almost ubiquitous. I don’t want to give real examples of it (and I could, as mentioned above, almost certainly find cases where I’ve done it, too). However, it’s something that we might be more aware of and careful about, given all that I’ve said, and especially as it provides a route to misunderstanding and even demonising opponents. And in some cases, our opponents are right there, taking part in discussion with us, so we can simply ask them: “Are you relying on proposition D?”

All in all, attributing beliefs to opponents needs to be done with great care if they have not expressly relied on or otherwise asserted those particular beliefs. Speculating about what your opponents really think (but are not saying) may not be the worst of intellectual crimes, and it may be very tempting. Sometimes these speculations might even be relevant and useful (say your opponent claims to be relying on “nice”, attractive, good-for-their-public-image premises E and F, but you have independent reason to think they are really reasoning from discredited proposition D).

As always nuance is important, but if we want to be fair, make progress, and avoid flame wars, let’s at least be careful about the kinds of reasoning I’ve discussed. At their worst, they are obviously fallacious. Even at their best, they are highly uncertain and need a lot of work before they can be employed cogently.

Fallacy Interview

On Thursday I did an interview about fallacies. You can hear it here: http://twobeerswithsteve.libsyn.com/

The direct link is http://bit.ly/sidT4N


30 More Fallacies

I’m giving the PDF version of my 30 More Fallacies as a pre-Winter Holiday* gift to the readers of Talking Philosophy. I will leave it to your discretion as to whether you have been naughty or nice (and whether the book is a reward for being nice or retribution for being naughty).

As a shameless plug, this book is also available for the Amazon Kindle and the Barnes & Noble Nook for 99 cents at http://www.amazon.com/30-More-Fallacies-ebook/dp/B0051BZ8ZK or http://www.barnesandnoble.com/w/30-more-fallacies-michael-labossiere/1101987481.

For those in the UK, the Kindle book is available for £0.86 at http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Ddigital-text&field-keywords=30+more+fallacies&x=0&y=0

This book is a follow-up to 42 Fallacies, which is also available for the Kindle for 99 cents at http://www.amazon.com/42-Fallacies-ebook/dp/B004ASOS2O or £0.86 at http://www.amazon.co.uk/42-Fallacies-ebook/dp/B004ASOS2O/ref=sr_1_1?ie=UTF8&qid=1321051637&sr=8-1

The Nook version is available at http://www.barnesandnoble.com/w/42-fallacies-michael-labossiere/1030759783.

*Perhaps Christmas
