Tag Archives: fallacy

Ad Baculum, Racism & Sexism

Opposition poster for the 1866 election. Geary... (Photo credit: Wikipedia)

I was asked to write a post about the ad baculum in the context of sexism and racism. To start things off, an ad baculum is a common fallacy that, like most common fallacies, goes by a variety of names. This particular fallacy is also known as appeal to fear, appeal to force and scare tactics. The basic idea is quite straightforward and the fallacy has a simple form:

Premise: Y is presented (a claim that is intended to produce fear).

Conclusion: Therefore, claim X is true (a claim that is generally, but need not be, related to Y in some manner).

 

This line of “reasoning” is fallacious because creating fear in people (or threatening them) does not constitute evidence that a claim is true. This tactic can be rather effective as a persuasive device since fear can be an effective motivator for belief. But, there is a distinction between a logical reason to accept a claim as true and a motivating reason to believe that a claim is true.

Like all fallacies, ad baculums will serve any master, so they can be employed as a device in “support” of any claim. In the days when racism and sexism were rather more overt in America, ad baculums were commonly employed in the hopes of motivating people to accept (or at least not oppose) racism and sexism. Naturally, the less subtle means of direct threats and physical violence (up to and including murder) were deployed as well.

In the United States of 2014, overt racism and sexism are regarded as unacceptable and those who make racist or sexist claims sometimes find themselves the object of public disapproval. In some cases, making such claims can cost a person his job.

In some cases, it will be claimed that the claims were not actually racist or sexist. In other cases, the racism or sexism will not be denied, but an appeal will be made to freedom of expression and concerns will be raised that a person is being denied his rights when he is subject to a backlash for remarks that some might regard as racist or sexist.

Given that people are sometimes subject to negative consequences for making claims that are seen by some as racist or sexist, it is not unreasonable to consider that ad baculums are sometimes deployed to limit free expression. That is, that the threat of some sort of retaliation is used to persuade people to accept certain claims. Or, at the very least, used in an attempt to silence people.

It is important to be clear about the distinction between an appeal to fear (using fear to get people to believe) and there being negative consequences for a person’s actions. For example, if someone says “you know, young professor, that we carefully consider a person’s view on race and sex before granting tenure…so I certainly hope that you are with us in your beliefs and actions”, then that is an appeal to fear: the young professor is supposed to agree with her colleagues and believe that claims are true because she has been threatened. But, if a young professor realizes that she will be fired for yelling things like “go back to England, white devil honkey crackers male-pigs” at her white male students and elects not to do so, she is not a victim of an appeal to fear. To use another example, if I refrain from shouting obscenities at the Dean because I would rather not be fired, I am not a victim of an ad baculum. As a final example, if I decide not to say horrible things about my friends because I know that they would reconsider their relationship to me, then I am not a victim of an ad baculum. As such, an ad baculum does not consist in a person facing potential negative consequences for saying things; it consists in a person being expected to accept a claim as true on the basis of “evidence” that is merely a threat or something intended to create fear. Thus, the fact that making claims that could be taken as sexist or racist could result in negative consequences does not entail that anyone is a victim of an ad baculum in this context.

What some people seem to be worried about is the possibility of a culture of coercion (typically regarded as leftist) that aims at making people conform to a specific view about sex and race. If there were such a culture or system of coercion that aimed at making people accept claims about race and gender using threats as “evidence”, then there would certainly be ad baculums being deployed.

I certainly will not deny that there are some people who do use ad baculums to try to persuade people to believe claims about sex and race. However, there is the reasonable question of how much this actually impacts discussions of race and gender. There is, of course, the notion that the left has powerful machinery in place to silence dissent and suppress discussions of race and sex that deviate from their agenda. There is also the notion that this view is a straw man of the reality of the situation.

One point of reasonable concern is considering the distinction between views that can be legitimately regarded as warranting negative consequences (that is, a person gets what she deserves for saying such things) and views that should be seen as legitimate points of view, free of negative consequences. For example, if I say that you are an inferior being who is worthy only of being my servant and unworthy of the rights of a true human, then I should certainly expect negative consequences and would certainly deserve some of them.

Since I buy into freedom of expression, I do hold that people should be free to express views that would be regarded as sexist and racist. However, like J.S. Mill, I also hold that people are subject to the consequences of their actions. So, a person is free to tell us one more thing he knows about the Negro, but he should not expect that doing so will be free of consequences.

There is also the way in which such views are considered. For example, if I were to put forth a hypothesis about gender roles for scientific consideration and was willing to accept the evidence for or against my hypothesis, then this would be rather different from just insisting that women are only fit for making babies and sandwiches. Since I believe in freedom of inquiry, I accept that even hypotheses that might be regarded as racist or sexist should be given due consideration if they are properly presented and tested according to rigorous standards. For example, some claim that women are more empathetic and even more ethical than men. While that might seem like a sexist view, it is a legitimate point of inquiry and one that can be tested and thus confirmed or disconfirmed. Likewise, while the claim that men are better suited for leadership might seem like a sexist view, it is also a legitimate point of inquiry and one that can presumably be investigated. As a final example, inquiring whether or not men are being pushed out of higher education is also a matter of legitimate inquiry—and one I have pursued.

If someone is merely spewing hate and nonsense, I am not very concerned if he gets himself into trouble. After all, actions have consequences. However, I am concerned about the possibility that scare tactics might be used to limit freedom of expression in the context of discussions about race and sex. The challenge here is sorting between cases of legitimate discussion/inquiry and mere racism or sexism.

As noted above, I have written about the possibility of sexism against men in current academics—but I have never been threatened and no attempt has been made to silence me. This might well be because my work never caught the right (or wrong) eyes or it might be because my claims are made as a matter of inquiry and rationally argued. Because of my commitment to these values, I am quite willing to consider examples of cases where sensible and ethical people have attempted to engage in rational and reasonable discussion or inquiry in regards to race or sex and have been subject to attempts to silence them. I am sure there are examples and welcome their inclusion in the comments section.

 



The Incest Argument & Same-Sex Marriage

Marriage March 2013 (Photo credit: American Life League)

One of the stock fallacious arguments against same-sex marriage is the slippery slope argument in which it is contended that allowing same-sex marriage will lead to allowing incestuous marriage. The mistake being made is, of course, that the link between the two is never actually established. Since the slippery slope fallacy is a fallacy, this is obviously a bad argument.

A non-fallacious argument that is also presented against same-sex marriage involves the contention that allowing same-sex marriage on the basis of a certain principle would require that, on pain of inconsistency, we also accept incestuous marriage. This principle is typically some variant of the principle that a person should be able to marry any other person. Given that incestuous marriage is bad, this would seem to entail that we should not allow same-sex marriage.

My first standard reply to this argument is that if accepting different-sex marriage does not require us to accept incestuous marriage, then neither does accepting same-sex marriage. But, if accepting same-sex marriage entails that we have to accept incestuous marriage, the same would also apply to different-sex marriage. That this is so is shown by the following argument. If same-sex marriage is based on the principle that a person should be allowed to marry the person they wish to marry, then it would seem that different-sex marriage is based on the principle that a person should be allowed to marry the person of the opposite sex they wish to marry. By analogy, if allowing a person to marry any person they want to marry allows incestuous marriage, then allowing a person to marry a member of the opposite sex would also allow incestuous marriage, albeit only to a member of the opposite sex. But, if the slide to incest can be stopped in the case of different-sex marriage, then the same stopping mechanism can be used in the case of same-sex marriage.

In the case of different-sex marriage, there is generally an injunction against people marrying close relatives. This same injunction would certainly seem to be applicable in the case of same-sex marriage. After all, there is nothing about accepting same-sex marriage that inherently requires accepting incestuous marriage.

One possible objection to my reply is that incestuous different-sex marriage is forbidden on the grounds that such relationships could produce children. More specifically, incestuous reproduction is more likely to produce genetic defects, which would provide a basis for a utilitarian moral argument against allowing incestuous marriage. Obviously, same-sex marriages have no possibility of producing children naturally. This would be a relevant difference between same-sex marriage and different-sex marriage. Thus, it could be claimed that while different-sex marriage can be defended from incestuous marriage on these grounds, the same cannot be said for same-sex marriage. Once same-sex marriage is allowed, it would be unprincipled to deny same-sex incestuous marriage.

There are four obvious replies here.

First, if the only moral problem with incestuous marriage is the higher possibility of producing children with genetic defects, then incestuous same-sex marriage would not be morally problematic. Ironically, the relevant difference that prevents one from denying same-sex incestuous marriage would also make it morally acceptable.

Second, if a different-sex incestuous couple could not reproduce (due to natural or artificial sterility), then this principle would allow them to get married. After all, they are no more capable of producing children than a same-sex couple.

Third, if it could be shown that a different-sex incestuous couple would have the same chance of having healthy children as a non-incestuous couple, then this would allow them to get married. After all, they are no more likely to produce children with genetic defects than a non-incestuous couple.

Fourth, given that the principle is based on genetic defects being more likely than normal, it would follow that unrelated couples who are likely to produce offspring with genetic defects should not be allowed to be married. After all, the principle is that couples who are likely to produce genetically defective offspring cannot be married. Thanks to advances in genetics, it is (or soon will be) possible (and affordable) to check the “genetic odds” for couples. As such, if incestuous marriage is wrong because of the higher possibility (whatever the level of unacceptable risk might be) of genetic defects, then the union of unrelated people who have a higher possibility of genetically defective children would also be wrong. This would seem to entail that if incestuous marriage should be illegal on these grounds, then so too should the union of unrelated people who have a similar chance of producing defective children.

In light of the above, the incest gambit against same-sex marriage would seem to fail. However, it also seems to follow that incestuous marriage would be acceptable in some cases.

Obviously enough, I have an emotional opposition to incest and believe that it should not be allowed. Of course, how I feel about it is no indication of its correctness or incorrectness. I do, however, have arguments against incest.

Many cases of incest involve a lack of consent, coercion or actual rape. Such cases often involve an older relative having sexual relations with a child. This sort of incest is clearly wrong and arguments for this are easy enough to provide; after all, one can make use of the usual arguments against coercion, child molestation and rape.

Where matters get rather more difficult is incest involving two consenting adults, be they of the same or different sexes. After all, the moral arguments that are based on a lack of consent no longer apply. Appealing to tradition will not work here; after all, that is a fallacy. The claim that it makes me uncomfortable or even sick would also not have any logical weight. As J.S. Mill argued, I have no right to prevent people from engaging in consensual activity just because I think it is offensive. What would be needed would be evidence of harm being done to others without their consent.

I have considered the idea that allowing incestuous marriage would be damaging to family relations. That is, the proper moral relations between relatives are such that incest would be harmful to the family as a whole. This is, obviously enough, analogous to the arguments made by those who oppose same-sex marriage. They argue that allowing same-sex marriage would be damaging to family relations because the proper moral relation between a married couple is such that same-sex marriage would damage the family as a whole. As it stands, the evidence is that same-sex couples do not create such harm. Naturally, there is not much evidence involving incestuous marriages or relationships. However, if it could be shown that incestuous relationships between consenting adults were harmful, then they could be justly forbidden on utilitarian grounds. Naturally, the same would hold true of same-sex relationships.

Reflecting on incestuous marriage has, interestingly enough, given me some sympathy for people who have reflected on same-sex marriage and believe that there is something wrong about it. After all, I am against incestuous marriage and thinking of it makes me feel ill. However, I am at a loss for a truly compelling moral argument against it that would not also apply to non-related couples. My best argument, as I see it, is the harm argument. This is, as noted above, analogous to the harm argument used by opponents of same-sex marriage. The main difference is, of course, that the harm arguments presented by opponents of same-sex marriage have been shown to have premises that are not true. For example, claims about the alleged harms to children from having same-sex parents have been shown to be untrue. As such, I am not against same-sex marriage, but I am opposed to incestuous marriage, be it between people of the same or different sexes.


For Better or Worse Reasoning



76 Fallacies in Print


76 Fallacies is now available in print from Amazon and other fine sellers of books.

In addition to combining the content of my 42 Fallacies and 30 More Fallacies, this book features some revisions as well as a new section on common formal fallacies.

As the title indicates, this book presents seventy six fallacies. The focus is on providing the reader with definitions and examples of these common fallacies rather than being a handbook on winning arguments or general logic.

The book presents the following 73 informal fallacies:

Accent, Fallacy of
Accident, Fallacy of
Ad Hominem
Ad Hominem Tu Quoque
Amphiboly, Fallacy of
Anecdotal Evidence, Fallacy Of
Appeal to the Consequences of a Belief
Appeal to Authority, Fallacious
Appeal to Belief
Appeal to Common Practice
Appeal to Emotion
Appeal to Envy
Appeal to Fear
Appeal to Flattery
Appeal to Group Identity
Appeal to Guilt
Appeal to Novelty
Appeal to Pity
Appeal to Popularity
Appeal to Ridicule
Appeal to Spite
Appeal to Tradition
Appeal to Silence
Appeal to Vanity
Argumentum ad Hitlerum
Begging the Question
Biased Generalization
Burden of Proof
Complex Question
Composition, Fallacy of
Confusing Cause and Effect
Confusing Explanations and Excuses
Circumstantial Ad Hominem
Cum Hoc, Ergo Propter Hoc
Division, Fallacy of
Equivocation, Fallacy of
Fallacious Example
Fallacy Fallacy
False Dilemma
Gambler’s Fallacy
Genetic Fallacy
Guilt by Association
Hasty Generalization
Historian’s Fallacy
Illicit Conversion
Ignoring a Common Cause
Incomplete Evidence
Middle Ground
Misleading Vividness
Moving the Goal Posts
Oversimplified Cause
Overconfident Inference from Unknown Statistics
Pathetic Fallacy
Peer Pressure
Personal Attack
Poisoning the Well
Positive Ad Hominem
Post Hoc
Proving X, Concluding Y
Psychologist’s Fallacy
Questionable Cause
Rationalization
Red Herring
Reification, Fallacy of
Relativist Fallacy
Slippery Slope
Special Pleading
Spotlight
Straw Man
Texas Sharpshooter Fallacy
Two Wrongs Make a Right
Victim Fallacy
Weak Analogy

The book contains the following three formal (deductive) fallacies:

Affirming the Consequent
Denying the Antecedent
Undistributed Middle


A, B, C, D – a fallacy

I don’t know whether this fallacy has a name of its own – I’m sure that Mike LaBossiere can tell us if it does – but how often have you seen somebody argue along the following lines?

P1. X believes A, B, and C.
P2. Y and Z (and others) believe A, B, C, and D.
C. (Therefore) X believes D.

What then happens is that X is criticised for believing D, even though D may be a proposition that X has never argued for, expressly relied upon, or even affirmed. In some cases, D may be some horrible proposition that would suggest X is of bad character if X actually believes it. In other cases, it may merely be something absurd, clearly false, or highly controversial.

As it stands, the argument that X believes D is straightforwardly invalid. It is no more valid if it takes the following variant form:

P1. X believes A, B, and C.
P2. Y and Z (and others) believe A, B, and C because they believe D.
C1. (Therefore) X believes A, B, and C because X believes D.
C. (Therefore) X believes D.

Reversing the two premises, arguments like this are similar to the classic (and straightforwardly fallacious):

P1. Some Xs are A’s.
P2. X-1 is an X.
C. X-1 is an A.

Perhaps, however, there is something more going on in the minds of people who use arguments such as I’ve identified.

Perhaps, on a particular occasion, they think that A,B,C without D is somehow an incoherent package of beliefs, and so they attribute to X what they see as the more coherent A,B,C,D.

Or perhaps they are reasoning inductively from a sociological observation that most people who believe A,B,C also believe D, so X probably believes D. Or maybe, related to the previous paragraph, they think that you could only, rationally, come to believe A,B,C on the basis of first believing D. Or the idea might be that believing D, which is widespread, causes a widespread bias in favour of people believing A,B,C (though D is highly controversial, or clearly false, or some such thing, once it’s explicitly identified).

Although it’s always open to someone to put these sorts of arguments, they are obviously going to be tricky in any particular case. Reasons have to be given as to why D produces a bias, why D might be widely (perhaps subconsciously?) believed even though it is clearly false, or absurd, or whatever, once identified; why the position A,B,C, without D, is incoherent; why there is no other basis for thinking A,B,C; and/or whatever else might be required to make out the particular argument. You need to be careful before you move too quickly to saddle somebody with the absurd or clearly false or highly controversial or just plain horrible proposition D.

That said, the temptation to move quickly and incautiously down this path seems to be a strong one. Often we have enough background beliefs of our own (“Surely no one could possibly believe A,B,C unless they first believe D!”) that we find it very natural to draw the final inference intuitively and almost unconsciously. I know that I sometimes feel this temptation, and I’m sure I’ve often succumbed to it. I don’t think there’s a lot of point in castigating people for it, or even in apologising when caught doing it.

Hasty reasoning of this kind, leaving out steps, and failing to recognise just how difficult and inconclusive such arguments tend to be, is all too tempting. It’s lazy. It cuts corners. It can lead to you paying insufficient attention to what an opponent is really saying. In the extreme, it might encourage you to demonise an opponent (X surely “must” believe the horrible proposition D!) without a good basis. But it is not the sort of thing done only by irrational or ill-willed people.

My proposal is not so much that we go around castigating this way of thinking, which is almost ubiquitous. I don’t want to give real examples of it (and I could, as mentioned above, almost certainly find cases where I’ve done it, too). However, it’s something that we might be more aware of and careful about, given all that I’ve said, and especially as it provides a route to misunderstanding and even demonising opponents. And in some cases, our opponents are right there, taking part in discussion with us, so we can simply ask them: “Are you relying on proposition D?”

All in all, attributing beliefs to opponents needs to be done with great care if they have not expressly relied on or otherwise asserted those particular beliefs. Speculating about what your opponents really think (but are not saying) may not be the worst of intellectual crimes, and it may be very tempting. Sometimes these speculations might even be relevant and useful (say your opponent claims to be relying on “nice”, attractive, good-for-their-public-image premises E and F, but you have independent reason to think they are really reasoning from discredited proposition D).

As always nuance is important, but if we want to be fair, make progress, and avoid flame wars, let’s at least be careful about the kinds of reasoning I’ve discussed. At their worst, they are obviously fallacious. Even at their best, they are highly uncertain and need a lot of work before they can be employed cogently.

Fallacy Interview

On Thursday I did an interview about fallacies. You can hear it here: http://twobeerswithsteve.libsyn.com/

The direct link is http://bit.ly/sidT4N


30 More Fallacies

I’m giving the PDF version of my 30 More Fallacies as a pre-Winter Holiday* gift to the readers of Talking Philosophy. I will leave it to your discretion as to whether you have been naughty or nice (and whether the book is a reward for being nice or retribution for being naughty).

As a shameless plug, this book is also available for the Amazon Kindle and the Barnes & Noble Nook for 99 cents at http://www.amazon.com/30-More-Fallacies-ebook/dp/B0051BZ8ZK or http://www.barnesandnoble.com/w/30-more-fallacies-michael-labossiere/1101987481.

For those in the UK, the Kindle book is available for .86 pounds at http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Ddigital-text&field-keywords=30+more+fallacies&x=0&y=0

This book is a follow-up to 42 Fallacies, which is also available for the Kindle for 99 cents at http://www.amazon.com/42-Fallacies-ebook/dp/B004ASOS2O or .86 pounds at http://www.amazon.co.uk/42-Fallacies-ebook/dp/B004ASOS2O/ref=sr_1_1?ie=UTF8&qid=1321051637&sr=8-1

The Nook version is available at http://www.barnesandnoble.com/w/42-fallacies-michael-labossiere/1030759783.

*Perhaps Christmas


Just Doesn’t Get It

Rhetoric of Reason (Image via Wikipedia)

When it comes to persuading people, a catchy bit of rhetoric tends to be far more effective than an actual argument. One rather neat bit of rhetoric that seems to be favored by Tea Party folks and others is the “just doesn’t get it” device.

As a rhetorical device, it is typically used with the intent of dismissing or rejecting a person’s (or group’s) claims or views. For example, someone might say “liberals just don’t get it. They think raising taxes is the way to go.” The idea is that the audience is supposed to accept that liberals are wrong about tax increases on the grounds that it has been asserted that they “just don’t get it.” Obviously enough, saying “they just don’t get it” does not prove that a claim or view is in error.

This method can also be cast as a fallacy, specifically an ad hominem. The idea is that a claim should be rejected based on a personal attack, namely the assertion that the person does not get it. It can also be seen as a genetic fallacy when used against a group.

This method is also sometimes used with the intent of showing that a view is correct, usually by claiming that someone (or some group) that (allegedly) disagrees is wrong. For example, someone might say “liberals just don’t get it. Raising taxes on the job creators hurts the economy.” Obviously enough, saying that someone (or some group) “just doesn’t get it” does not prove (or disprove) anything. What is needed is, obviously enough, evidence that the claim in question is true. In the example, this would involve showing that raising taxes on the job creators hurts the economy.

In general, the psychology behind this method seems to be that when a person says (or hears) “X doesn’t get it”, he means (or takes it to mean) “X does not believe what I believe” and thus rejects X’s claim. Obviously enough, this is not good reasoning.

It is worth noting that if it can be shown that someone “just doesn’t get it”, then this would not be mere rhetoric or a fallacy. However, what would be needed is evidence that the person is in error and thus does not, in fact, get it.


Is/Ought

David Hume’s statements on ethics foreshadowed... (Image via Wikipedia)

While on a run in Maine, I happened to be thinking about the Is/Ought problem as well as fallacies. I was also thinking about bears and how many might be about in the woods, but that is another matter.

This problem was most famously put forth by David Hume. Roughly put, the problem is how one might derive an “ought” from an “is.” Inspired by Hume, some folks even go so far as to claim that it is a fallacy to draw a moral “ought” from a non-moral “is.” This is, unlike the more common fallacies, rather controversial. After all, whether or not it is a fallacy hinges on substantial matters in ethics rather than on something far less contentious, like a matter of simple relevance. While I will not address the core of the matter, I will present some thoughts on the periphery.

As I ran and thought about the problem, I noted that people are often inclined to make moral inferences based on what they think or what they do. To be a bit more specific, people are often inclined to reason in the following two ways. Naturally, this could be expanded but for the sake of brevity I will just consider thought and action.

The first is belief. Not surprisingly, people often “reason” as follows: I/most people/all people believe that X is right (or wrong). Therefore people ought to do X (or ought to not do X). For example, a person might assert that because (they think that) most people believe that same-sex marriage is wrong, it follows that it ought not be done. This is, obviously enough, the fallacy of appeal to belief.

The second is action. People are also inclined to infer that X is something that ought to be done (or at least allowed) on the basis that it is done by them or most/all people. For example, a person might assert that people ought to be able to steal office supplies because it is something everyone does. This is the classic fallacy of appeal to common practice.

While these are both established fallacies, it seems somewhat interesting to consider whether or not they are also Is/Ought fallacies when they involve deriving an “ought” from the “is” of belief or action.

On the one hand, it is rather tempting to hold that they are not also Is/Ought errors. After all, it could be argued that the error is exhausted in the context of the specific fallacies and there is no need to consider a supplemental error involving deriving an “ought” from an “is.”

On the other hand, these two fallacies seem to provide a solid foundation for the Is/Ought error that is reasonably well based on established logic. This suggests (but hardly proves) that there might be some merit in considering the Is/Ought fallacy in a slightly different light: that it can actually be regarded as a special “manifestation” of various other fallacies. Or perhaps not.


Straw, Lies & Errors


Straw man is a rather commonly committed fallacy. Interestingly, it is almost as common for people to accuse others of making straw men as it is for people to actually commit said fallacy. Since I am in a phase of holiday laziness, I decided to write a bit about straw men, lies and errors rather than take on a major topic.

Defining the straw man fallacy is easy enough:

The straw man fallacy is committed when a person simply ignores a person’s actual position (argument, theory, etc.) and substitutes a distorted, exaggerated or misrepresented version of that position. This sort of “reasoning” has the following pattern:

1. Person A has position X.
2. Person B presents position Y (which is a distorted version of X).
3. Person B attacks position Y.
4. Therefore X is false/incorrect/flawed.

This sort of “reasoning” is fallacious because attacking a distorted version of a position simply does not constitute an attack on the position itself. One might as well expect an attack on a poor drawing of a person to hurt the person.

Obviously enough, it is reasonable to point out when someone is making a straw man and to note that any attack on the straw man will fail to do any damage to the original version. Of course, it is important to be sure that such an accusation actually fits.

Whether a characterization is a straw man or not depends, obviously enough, on what is being characterized. Roughly put, “strawness” is a relative thing and what might be a straw man characterization of one person’s position could very well be an accurate description of another person’s view. As such, a person can be wrongly accused of presenting a straw man because the accuser is mistaken about which position the accused is actually describing. I have even noticed that people sometimes assume that the writer must be writing about them when, in fact, the writer is not.

So, before crying straw, it is a good idea to see what the person is actually characterizing. While it might seem to be distorted or exaggerated, it might really be spot on.

While most straw men are distorted versions of specific views, there is also a variation of the straw man which involves presenting a position that “no one” actually holds and attacking it. In many cases, these positions are attributed to vaguely identified groups (feminists, liberals, conservatives, etc.) rather than specific individuals. While it is obviously legitimate to point out when people do this sort of thing, it should be determined whether the person is actually setting up such a generic straw man. As noted above, there are views that are really held that would tend to seem like willful distortions on the part of the person describing them.

There are various other ways to use straw men, but I’ll leave those for people to bring up in comments.

Switching now to lies, I have noticed that when I teach this fallacy my students inevitably ask about the difference between presenting a straw man and lying.

On the face of it, a straw man would be a form of lie. After all, a person knowingly presenting a distortion or exaggeration with an intent to deceive would seem to be engaged in an act that falls nicely within the kingdom of lies. As such, I do not see any significant problem with characterizing intentional straw men as involving a lie (or lies) as a component. For example, when the health care bill Obama was supporting was characterized as establishing death panels and attacked on those grounds, then that would seem to qualify as both a straw man and a lie.

That said, there is more to a straw man than merely lying. As noted above, the straw man fallacy involves more than just presenting a distortion; it also involves rejecting the original on the basis of an attack on the distorted version. As such, it would probably be best to say that a straw man makes use of a lie (or lies).

The deceptive aspect of the straw man also brings in a moral element on top of the critical thinking element. After all, engaging in poor reasoning need not be immoral. However, the intentional use of deceit is often morally problematic (although, as people will no doubt point out, there are intuitively appealing exceptions). One obvious concern is that if a position is actually bad enough to morally require that deceits be used to attack it, then it would seem to follow that it could be justly criticized “in the flesh” rather than “in the straw.” No doubt there are exceptions to this as well: positions that are wicked or flawed and yet could not be defeated by arguing against them in their actual forms.

While many straw men do involve intentional deceits, there are others that do not. These are cases that involve errors.

One obvious example of straw man by error is when someone tries to honestly characterize a view and simply gets it wrong because the view is rather difficult to understand. For example, I often see such straw men in student papers when they try to summarize the arguments of a philosopher. These straw men arise from honest misunderstanding rather than from any intent to deceive.

Another example of straw man by error is when someone presents a straw man out of ignorance, sloppiness or some such reason. For example, a person might receive an email that distorts the Republican position on tax cuts and then go on to use that version in his blog. In this case, the person is not engaged in an intentional exaggeration.

To use an analogy, this could be seen as being a bit like counterfeit money. A person who knowingly creates a straw man is like a counterfeiter: she has created a deceitful product that she hopes others will accept as the real thing. Someone who accepts the straw man and unknowingly passes it on to others is like a person who gets counterfeit money and spends it herself, unaware that she is passing on phony money.

As with counterfeit money, the person who passes the straw man along in ignorance is not morally responsible for the deceit; she is acting in good faith and is also a victim. This, obviously enough, assumes that the person passing it on took a reasonable amount of effort to assess what was passed on to her.

Sticking with the money analogy, if I pick up some flawless counterfeit bills in my change at the grocery store and I pass them on to others when I buy things, I would seem to be an innocent victim. After all, the source is supposed to be safe and the bills pass all the tests I could reasonably be expected to use. However, if I am handed bills from a questionable person or the money looks a bit fishy, then I would be culpable (to a degree) for uncritically accepting them and passing them on to others.

If this analogy holds, then a person who passes on a straw man from others might be called to task for this or might merely be an innocent victim. In some cases it might be rather hard to determine which category a person falls into.

As one final point, people sometimes make the mistake of conflating errors and straw men. For example, I might claim that WikiLeaks’ leak was a good thing because it revealed important new information to the public, such as the fact that Saudis provide considerable support to terrorist organizations and the fact that Pakistan also lends support to such groups. In response to this someone might say that I made a straw man because it is already well known that the Saudis and Pakistanis are supporters of terrorist groups.

However, there is a difference between merely being in error and making a straw man. To be specific, being in error is merely being wrong and a straw man is, well, what was defined above. In the example just given, I could be completely wrong (some have claimed that almost everything leaked was already available); however, it would not be a straw man because there was no attempt to present a distorted or exaggerated version of a position. The main test (which is not perfect) is to ask what position, if any, is being distorted or exaggerated. If there are not plausible grounds for claiming an act of distortion has occurred, then it is more reasonable to claim that the person is wrong about the facts rather than accusing him of creating a straw man. Naturally, there will be gray areas in which it is not clear which explanation is the most plausible.


Defending 42 Fallacies

One thing I have found interesting about making my popular (in both senses) work on fallacies readily available is that it generates some rather hostile criticisms. In fact, one such criticism, posted as a comment by argumentics, was removed from this blog site.

When I found that the comment had been deleted, I was somewhat split in my view. On the one hand, allowing comments that go beyond criticism into hostility can be damaging to a blog by allowing the conversation to spiral down rapidly. On the other hand, criticisms should be taken seriously and addressed.

Of course, if someone wants his or her criticism to be taken seriously and considered an addition to the conversation, that person should present his/her comments in a suitable way. That is, in a civil manner.

While I will not reproduce the entirety of the deleted comments, I will present the criticisms made by this person (without the condescending remarks and personal attacks) and reply to them. This is mainly because I do not like to walk away from an attack.

Also, the criticisms raised by argumentics are not new; over the years the same sort of comments have arrived in my email. By addressing what I take to be misinterpretations of my work I hope to lower the chance of other people making the same mistakes.

Argumentics begins by claiming that there is “no single difference between your example of “Inductive Argument” and that of “Inductive Fallacy”. What resembles (and makes them both “inductive”) is that they are deductively invalid: their form is not that of a valid syllogism.”

Argumentics is in error here. What makes an argument inductive is not that it is deductively invalid. After all, affirming the consequent is an invalid argument but is not classified as an inductive argument.

While inductive arguments are all technically invalid (since an inductive argument can have all true premises and a false conclusion at the same time), they are not intended to be valid and are assessed by different standards.

Turning back to the examples themselves, they are different.

Example of an Inductive Argument
Premise 1: Most American cats are domestic house cats.

Premise 2: Bill is an American cat.

Conclusion: Bill is a domestic house cat.

Example of an Inductive Fallacy
Premise 1: Having just arrived in Ohio, I saw a white squirrel.

Conclusion: All Ohio squirrels are white.
(While there are many, many squirrels in Ohio, the white ones are very rare).

The non-fallacious inductive argument is an inductive syllogism (see comments below) and the specific example is a strong argument. After all, if it is true (which it is) that most American cats are domestic house cats and Bill is an American cat, it is very likely that Bill is a domestic house cat. In short, the truth of the premises makes the conclusion likely to be true and this makes the argument strong.

In the example of the fallacy, the inference is from one example (the white squirrel) to all Ohio squirrels. The truth of the first premise does not make the conclusion likely to be true, hence the reasoning is poor. It is, in fact, a classic example of a hasty generalization.

Argumentics also brings up a not uncommon comment, namely that my examples are not really arguments. For example, s/he asserts that the following is not an argument: “Equal rights for women? Yeah, I’ll support that when they start paying for dinner and taking out the trash! Hah hah! Fetch me another brewski, Mildred.”

Argumentics does raise a reasonable concern here. After all, the imaginary person does not clearly identify his premises or conclusion and could be taken as merely saying stuff rather than as committing an error in reasoning. As such, it would seem to be something of a leap to take this as a fallacy, and it could also be contended that I should have provided an example with a clear conclusion and clear premises. For example, a “complete” example would look something like this:

Premise 1: I have mocked the idea of equal rights.
Conclusion: Therefore, women should not have equal rights.

However, the reason why I used the original example is that when people engage in fallacious reasoning in “real life”, they typically do so in a very rough and informal manner. In fact, sometimes it is so rough and informal that it might be a matter of reasonable dispute as to whether or not the person is actually even arguing. However, in the example I gave, the person seems to intend to reject the notion of equal rights for women on the basis of his making fun of the idea, which seems to be an appeal to ridicule.

I am willing to admit that this is a reasonable point of concern and is, in fact, one my students raise: how do we distinguish between a fallacy and someone merely saying things that sort of look like a fallacy (the same applies to non-fallacious arguments)? In some cases, we can clearly tell. In other cases, it can be a matter of judgment. What, I think, is important is being able to tell when good reasoning is absent, either because a fallacy is being committed or because reasoning turns out to be absent altogether. At a later date, I should write more about this.
