Category Archives: Critical Thinking

Defining Our Gods

The theologian Alvin Plantinga was interviewed for The Stone this weekend, making the claim that Atheism is Irrational. His conclusion, however, seems to allow that agnosticism is pretty reasonable, and his thought process is based mostly on the absurdity of the universe and the hope that some kind of God will provide an explanation for whatever we cannot make sense of. These attitudes seem to me to require that we clarify a few things.

There are a variety of different intended meanings behind the word “atheist” as well as the word “God”. I generally make the point that I am atheistic when it comes to personal or specific gods like Zeus, Jehovah, Jesus, Odin, Allah, and so on, but agnostic if we’re talking about deism, that is, when it comes to an unnamed, unknowable, impersonal, original or universal intelligence or source of some kind. If this second force or being were to be referred to as “god” or even spoken of through more specific stories in an attempt to poetically understand some greater meaning, I would have no trouble calling myself agnostic as Plantinga suggests. But if the stories or expectations for afterlife or instructions for communications are meant to be considered as concrete as everyday reality, then I simply think they are as unlikely as Bigfoot or a faked moon landing – in other words, I am atheistic.

There are atheists who like to point out that atheism is ultimately a lack of belief, and therefore as long as you don’t have belief, you are atheistic – basically, those who have traditionally been called agnostics are just as much atheists. The purpose of this seems to be to expand the group of people who will identify more strongly as non-believers, and to avoid nuance – or what might be seen as hesitation – in self-description.

However, this allows for confusion and unnecessary disagreement at times. I think in fact that there are a fair number of people who are atheistic when it comes to very literal gods, like the one Ken Ham was espousing in his debate with Bill Nye. Some people believe, as Ken Ham does, that without a literal creation, the whole idea of God doesn’t make sense, and so believe in creationism because they believe in God. Some share this starting point, but are convinced by science and conclude there is no god. But others reject the premise and don’t connect their religious positions with their understandings of science. It’s a popular jab among atheists that “everyone is atheistic when it comes to someone else’s gods”, but it’s also a useful description of reality. We do all choose to not believe certain things, even if we would not claim absolute certainty.

Plenty of us would concede that only math or closed systems can be certain, so it’s technically possible that any conspiracy theory or mythology at issue is actually true – but still in general it can be considered reasonable not to believe conspiracy theories or mythologies. And if one includes mainstream religious mythologies with the smaller, less popular, less currently practiced ones, being atheistic about Jesus (as a literal, supernatural persona) is not that surprising from standard philosophical perspectives. The key here is that the stories are being looked at from a materialistic point of view – as Hegel pointed out, once spirituality is asked to compete in an empirical domain, it has no chance. It came about to provide insight, meaning, love and hope – not facts, proof, and evidence.

The more deeply debatable issue would be a broadly construed and non-specific deistic entity responsible for life, intelligence or being. An argument can be made that a force of this kind provides a kind of unity to existence that helps to make sense of it. It does seem rather absurd that the universe simply happened, although I am somewhat inclined to the notion that the universe is just absurd. On the other hand, perhaps there is a greater order that is not always evident. I would happily use the word agnostic to describe my opinion about this, and the philosophical discussion regarding whether there is an originating source or natural intelligence to being seems a useful one. However, it should not be considered to be relevant to one’s opinion about supernatural personas who talk to earthlings and interfere in their lives.

There are people who identify as believers who really could be categorized as atheistic in the same way I am about the literal versions of their gods. They understand the stories of their religions as pathways to a closer understanding of a great unspecified deity, but take them no more literally than Platonists take the story of the Cave, which is to say, the stories are meant to be meaningful and the concrete fact-based aspect is basically irrelevant. It’s not a question of history or science: it’s metaphysics. Let’s not pretend any of us know the answer to this one.

Picking between Studies

Illustration of a swan-necked flask experiment (Photo credit: Wikipedia)

In my last essay I looked briefly at how to pick between experts. While people often rely on experts when making arguments, they also rely on studies (and experiments). Since most people do not do their own research, the studies mentioned are typically those conducted by others. While using study results in an argument is quite reasonable, making a good argument based on study results requires being able to pick between studies rationally.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick a study based on the fact that it agrees with what you already believe. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it is reasonable to believe it.

Another common approach is to accept a study as correct because the results match what you really want to be true. For example, a liberal might accept a study that claims liberals are smarter and more generous than conservatives.  This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

In some cases, people try to create their own “studies” by appealing to their own anecdotal data about some matter. For example, a person might claim that poor people are lazy based on his experience with some poor people. While anecdotes can be interesting, to take an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations of studies, provided that they have the relevant information about the study. The following provides a concise guide to studies—and experiments.

In normal use, people often jam together studies and experiments. While this is fine for informal purposes, this distinction is actually important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of the experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible—the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
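
To make the general method concrete, here is a minimal sketch in Python (my own illustration, not from the original essay; the subject pool and sample size are hypothetical placeholders) of the sampling and assignment steps:

    # A minimal sketch of random sampling and random group assignment.
    import random

    def assign_groups(population, sample_size):
        """Draw a random sample, then split it evenly into
        an experimental group and a control group."""
        sample = random.sample(population, sample_size)  # random selection
        random.shuffle(sample)                           # random assignment
        half = sample_size // 2
        return sample[:half], sample[half:]

    subjects = list(range(1000))  # stand-in for the population
    experimental, control = assign_groups(subjects, 100)

Randomizing both the selection and the split is what keeps the two groups as alike as possible on average.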

Assuming that the experiment was conducted properly, whether or not the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The key idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment actually shows there is a causal connection, it is important to know the size of the sample used. After all, the difference between the experimental and control groups might be rather large, but might not be significant. For example, imagine that an experiment is conducted involving 10 people. 5 people get a diet drug (experimental group) while 5 do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is actually not statistically significant: the sample is so small that the difference could be due entirely to chance. The following table shows some information about statistical significance.

Sample size                   Approximate figure that the difference must exceed
(control + experimental)      to be statistically significant (in percentage points)

10                            40
100                           13
500                           6
1,000                         4
1,500                         3
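
For readers who want to check significance claims themselves, here is a small sketch (my own, not part of the original essay; the group sizes and outcome counts are invented for illustration) using Fisher's exact test from SciPy:

    # Sketch: the same percentage-point difference can be non-significant
    # in a small sample yet significant in a larger one.
    from scipy.stats import fisher_exact

    def significant(hits_exp, n_exp, hits_ctrl, n_ctrl, alpha=0.05):
        """Fisher's exact test on a 2x2 outcome table."""
        table = [[hits_exp, n_exp - hits_exp],
                 [hits_ctrl, n_ctrl - hits_ctrl]]
        _, p = fisher_exact(table)
        return p, p < alpha

    # 4/5 vs 2/5 is a 40-point difference, but p is large (not significant).
    print(significant(4, 5, 2, 5))
    # The same 40-point difference with 50 per group is highly significant.
    print(significant(40, 50, 20, 50))

The point matches the table: a 40-point gap proves nothing with ten subjects, but is overwhelming with a hundred.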

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to radiation to test its effect would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a suspected cause. The main difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to radiation by an accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.

Since the study relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the experiment. After all, in the study the researchers have to take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness, but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then that can be tentatively taken as a causal factor—one that will probably require additional study. As with the other study and experiment, the statistical significance of the results depends on the size of the study—which is why a study of adequate size is important.

Of the three methods, this is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the problem that those who chew tobacco are often ex-smokers. As such, the smoking might be the actual cause. To sort this out would involve a study involving chewers who are not ex-smokers.

It is also worth referring back to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double blind experiment/study.

Overall, here are some key questions to ask when picking a study:

Was the study/experiment properly conducted?

Was the sample size large enough?

Were the results statistically significant?

Were those conducting the study/experiment experts?


Picking between Experts

A logic diagram proposed for WP OR to handle a situation where two equal experts disagree. (Photo credit: Wikipedia)

One fairly common way to argue is the argument from authority. While people rarely follow the “strict” form of the argument, the basic idea is to infer that a claim is true based on the allegation that the person making the claim is an expert. For example, someone might claim that second hand smoke does not cause cancer because Michael Crichton claimed that it does not. As another example, someone might claim that astral projection/travel is real because Michael Crichton claims it does occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, there are medical experts who claim that second hand smoke does cause cancer.

If you are an expert in the field in question, you can endeavor to pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that second hand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an expert, a person is presumably qualified to make an informed pick. The obvious problem is, of course, that experts themselves pick different experts to accept as being correct.

The problem is even greater when it comes to non-experts who are trying to pick between experts. Being non-experts, they lack the expertise to make authoritative picks between the actual experts based on their own knowledge of the fields. This raises the rather important concern of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One common approach is to pick an expert based on the fact that she agrees with what you already believe. That is, to infer that the expert is right because you believe what she says. This is rather obviously not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because he makes a claim that you really want to be true. For example, a smoker might elect to believe an expert who claims second hand smoke does not cause cancer because he does not want to believe that he might be increasing the risk that his children will get cancer by his smoking around them. This sort of “reasoning” is the classic fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they perceive as positive but that are, in fact, irrelevant to the person’s actual credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not actually relevant to assessing a person’s expertise. For example, a person might be very likeable, but not know a thing about what they are talking about.

Fortunately, there are some straightforward standards for picking and believing an expert. They are as follows.

 

1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will, obviously, not be well supported. In contrast, claims made by a person with the needed degree of expertise will be supported by the person’s reliability in the area. One rather obvious challenge here is being able to judge that a person has sufficient expertise. In general, the question is whether or not a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.

 

2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. People often mistake expertise in one area (acting, for example) for expertise in another area (politics, for example).

 

3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is perhaps the most important factor. As a general rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The basic idea is that the majority of experts are more likely to be right than those who disagree with the majority.

It is important to keep in mind that no field has complete agreement, so some degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could turn out to be wrong. That said, the reason it is still reasonable for non-experts to go with the majority opinion is that non-experts are, by definition, not experts. After all, if I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.

 

4. The person in question is not significantly biased.

This is also a rather important standard. Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then the person’s credibility as an authority is reduced. This is because there would be reason to believe that the expert might not be making a claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice. A biased expert can still be making claims that are true—however, the person’s bias lowers her credibility.

It is important to remember that no person is completely objective. At the very least, a person will be favorable towards her own views (otherwise she would probably not hold them). Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that researchers who receive funding from pharmaceutical companies might be biased while others might claim that the money would not sway them if the drugs proved to be ineffective or harmful.

Disagreement over bias can itself be a very significant dispute. For example, those who doubt that climate change is real often assert that the experts in question are biased in some manner that causes them to say untrue things about the climate. Questioning an expert based on potential bias is a legitimate approach—provided that there is adequate evidence of bias that would be strong enough to unduly influence the expert. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from a claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are clearly not infallible. However, they do provide a good general guide to logically picking an expert. Certainly more logical than just picking the one who says things one likes.

 


Hyperbole, Again

Protesters at the Taxpayer March on Washington (Photo credit: Wikipedia)

Hyperbole is a rhetorical device in which a person uses an exaggeration or overstatement in order to create a negative or positive feeling. Hyperbole is often combined with a rhetorical analogy. For example, a person might say that someone told “the biggest lie in human history” in order to create a negative impression. It should be noted that not all vivid or extreme language is hyperbole: if the extreme language matches the reality, then it is not hyperbole. So, if the lie was actually the biggest lie in human history, then it would not be hyperbole to make that claim.

People often make use of hyperbole when making rhetorical analogies/comparisons. A rhetorical analogy involves comparing two (or more) things in order to create a negative or positive impression.  For example, a person might be said to be as timid as a mouse or as smart as Einstein. By adding in hyperbole, the comparison can be made more vivid (or possibly ridiculous). For example, a professor who assigns a homework assignment that is due the day before spring break might be compared to Hitler. Speaking of Hitler, hyperbole and rhetorical analogies are stock items in political discourse.

Some Republicans have decided that Obamacare is going to be their main battleground. As such, it is hardly surprising that they have been breaking out the hyperbole in attacking it. Dr. Ben Carson launched an attack by seeming to compare Obamacare to slavery, but the response to this led him to “clarify” his remarks to mean that he thinks Obamacare is not like slavery, but merely the worst thing to happen to the United States since slavery. This would, of course, make it worse than all the wars, the Great Depression, 9/11 and so on.

While he did not make a slavery comparison, Ted Cruz made a Nazi comparison during his filibuster. As Carson did, Cruz and his supporters did their best to “clarify” the remark.

Since slavery and Nazis had been taken, Rick Santorum decided to use the death of Mandela as an opportunity to compare Obamacare to Apartheid.

When not going after Obamacare, Obama himself is a prime target for hyperbole. John McCain, who called out Cruz on his Nazi comparison, could not resist making use of some Nazi hyperbole in his own comparison. When Obama shook Raul Castro’s hand, McCain could not resist comparing Obama to Chamberlain and Castro to Hitler.

Democrats and Independents are not complete strangers to hyperbole, but they do not seem to wield it quite as often (or as awkwardly) as Republicans. There have been exceptions, of course; the sweet allure of a Nazi comparison is bipartisan. However, my main concern here is not to fill out political scorecards regarding hyperbole. Rather, it is to discuss why such uses of negative hyperbole are problematic.

One point of note is that while hyperbole can be effective at making people feel a certain way (such as angry), its use often suggests that the user has little in the way of substance. After all, if something is truly bad, then there would seem to be no legitimate need to make exaggerated comparisons. In the case of Obamacare, if it is truly awful, then it should suffice to describe its awfulness rather than make comparisons to Nazis, slavery and Apartheid. Of course, it would also be fair to show how it is like these things. Fortunately for America, it is obviously not like them.

One point of moral concern is the fact that making such unreasonable comparisons is an insult to the people who suffered from or fought against such evils. After all, such comparisons transform such horrors as slavery and Apartheid into mere rhetorical chips in the latest political game. To use an analogy, it is somewhat like a person who has played Call of Duty comparing himself to combat veterans of actual wars. Out of respect for those who suffered from and fought against these horrors, they should not be used so lightly and for such base political gameplay.

From the standpoint of critical thinking, such hyperbole should be avoided because it has no logical weight and serves to confuse matters by playing on the emotions. While that is the intent of hyperbole, this is an ill intent. While rhetoric does have its legitimate place (mainly in making speeches less boring) such absurd overstatements impede rather than advance rational discussion and problem solving.

 


The End Time & Government

Michele Bachmann (Photo credit: Gage Skidmore)

Michele Bachmann seems to have claimed that Obama’s support of the Syrian rebels is a sign of the End Times:

“[President Barack Obama's support of Syrian rebels] happened and as of today the United States is willingly, knowingly, intentionally sending arms to terrorists, now what this says to me, I’m a believer in Jesus Christ, as I look at the End Times scripture, this says to me that the leaf is on the fig tree and we are to understand the signs of the times, which is your ministry, we are to understand where we are in God’s end times history. [...] And so when we see up is down and right is called wrong, when this is happening, we were told this; that these days would be as the days of Noah. We are seeing that in our time. Yes it gives us fear in some respects because we want the retirement that our parents enjoyed. Well they will, if they know Jesus Christ.”

While Bachmann’s political star seems to be falling, she is apparently still an influential figure and popular with many Tea Party members. As such, it seems worthwhile to address her claims.

Her first claim is a factual matter about the mundane world: she asserts that Obama is “willingly, knowingly, intentionally sending arms to terrorists.” This claim is easy enough to disprove. Despite some pressure (including some from Republicans) to arm the rebels, the administration has taken a very limited approach: rebels that have been determined to not be terrorists will be supported with defensive aid rather than provided with offensive weaponry. Thus, Bachmann (who occasionally has problems with facts) is wrong on two counts. First, Obama is not sending arms (taken as offensive weapons). Second, he is not sending anything to terrorists.

Now, it could be objected that means of defense are arms, under a broad definition of “arms.” Interestingly, as I learned in the 1980s when the debate topic for a year was arms sales, “arms” can be defined very broadly indeed. If Bachmann defines “arms” broadly enough to include defensive aid, then Obama would be sending arms. However, this is rather a different matter than if Obama were sending offensive weapons, such as the Stinger missiles we provided to the mujahedeen when they were fighting the Russians.

It could also be objected that Obama is sending arms to terrorists. This could be done by claiming that he knows that what he sends to Syria could end up being taken from the intended recipients by terrorists. This is a reasonable point of concern, but it seems clear from her words that she does not mean this.

It could also be done by claiming that Obama is lying and he is, in fact, sending the aid to actual terrorists. Alternatively, it could be claimed that he is sending the aid to non-terrorists, but intends for the terrorists to take it. While this is possible (Presidents have lied about supplying arms in the past), actual proof would be needed to show that he is doing this with will, knowledge and intent. That is, it would have to be established that Obama knows the people to whom he is sending the aid are terrorists and/or that he intends for terrorists to receive these arms. Given the seriousness of the claim, this would require equally serious proof. Bachmann does not seem to provide any actual evidence for her accusation, hence there is little reason to place confidence in her claim.

While politicians tend to have a “special” relationship with the truth, Bachmann seems to have an extra-special relationship.

Her second claim is a factual matter about the supernatural world: she seems to be claiming that Obama’s alleged funding of terrorists is a sign of the End Times. While I am not a scholar of the end of the world (despite authoring a fictional version of the End Time), what she is claiming does not seem to be accurate. That is, there seems to be no reference to something adequately similar to Obama funding terrorists as a sign of the End Time. But perhaps Bachmann has access to some special information that has been denied to others.

While predictions that the End Time is near are common, it does seem to be bad theology to make such predictions in the context of Christianity. After all,  the official epistemic line seems to be that no one but God knows when this time will come: “But of that day and that hour knows no man, no, not the angels which are in heaven, neither the Son, but the Father.” As such, any speculation that something is or is not a sign of the End Time would be rather problematic. If the bible is correct about this, Bachmann should not make such a claim–she cannot possibly know that something is a sign of the End Times or not, since no one can know (other than God) when it will occur.

It could be replied that the bible is wrong about this matter and Bachmann can know that she has seen a sign and that the End Times are thus approaching. The obvious reply is that if the bible is wrong about this, then it could be wrong about other things–such as there being an End Time at all.

Interestingly, her view of the coming End Time might help explain her positive view of the government shutdown. When asked about the shutdown, she said, “It’s exactly what we wanted, and we got it.” While Bachmann has not (as of this writing) claimed that this is also a sign of the End Times, her view that the End Times are approaching would certainly provide an explanation for her lack of concern. After all, if the End Time is fast approaching, then the time of government here on earth is fast approaching its end. Bachmann does seem to think it is on its way.

Weirdly, she also seems to think that Jesus will handle our retirement–which is presumably a reason we will not need the government. She says, “Yes it gives us fear in some respects because we want the retirement that our parents enjoyed. Well they will, if they know Jesus Christ.” This seems to be saying that people who believe the End Time is coming, such as herself, will worry that they will not be able to enjoy their retirement. This seems oddly reasonable: after all, the End Time would certainly clash with the sort of non-end-of-the-world retirement our parents enjoyed. But, oddly enough, she thinks that people who know Jesus will be able to have that retirement, apparently with Jesus providing the benefits rather than the state.

As might be imagined, the fact that Bachmann is an influential figure who apparently has some influence on politics is terrifying enough to itself be a sign of the End Time.


Why you are, categorically, racist (or sexist)

Given the discussion surrounding the Zimmerman verdict, and the recent controversy over Colin McGinn’s resignation due to sexual harassment charges, I thought I would make a brief comment on the larger issue these cases exemplify. In both of these cases, there are arguments to be made on specific incidents and those who defend the men involved do not think they are being racist or sexist—they’re just concerned about details. The problem is that people generally tend to be less concerned about those details when the incidents affect white men, or male students.

If you’re not sure that’s true, watch this ABC experiment which shows a white male, a black male, and a white female all performing the same action of stealing a bike in broad daylight. The results are both not all that surprising and a very solid reminder that small prejudices add up and have enormous impact. The white man is more or less left alone to his business. Some people are curious about what he is doing, but no one really actively interferes. The black man is immediately questioned and people call authorities very quickly. The white woman is approached by men, and they go out of their way to help, even with full knowledge that she is trying to steal the bike.

Obviously it sounds worst for the black man, and it is easy to shrug off the reaction for the woman as really more of a benefit – even when trying to do something illegal, she can get help from strangers. But does she want help? And do these sudden assistants expect anything in return? Even if it is no more than a friendly smile and flirtatious banter, the key to these stories is always how single interactions can add up. If a black man deals with just slightly more suspicion, but deals with it constantly, his life is radically different from the white man’s. Likewise, if a woman faces prurient interest, even if it is meant in fun, and not intended to lead to a sexual relationship, if she faces it from every direction it changes the world she lives in.

These effects are due to a common way that human beings think. It is a claim often made by philosophers that people think in categories — in fact according to some philosophers it is what makes the human mind human. I would argue that things are more complex, and that our ability to conceptualize is a skill and a habit that we develop. It makes it easier for us to hold multiple thoughts together at one time, but at the cost of detail and fine distinction. However, that fine-tuned capacity is still available; it just has to be brought into focus.

But categorical thinking is not the only way that humans parse the world, nor is it unique to human beings. Animals understand categories, just to less complex degrees, when they respond to “fetch” and “trot” and “cracker.” Dogs learn tricks, horses understand a series of different movements, birds and chimps can even communicate with people to a limited extent using words that people invented. More importantly, concepts are not stagnant—they can be altered through imagination, and are not absolutes but, to be meta about it, simply another concept we have come up with to explain the way we organize our reactions and ideas.

And the human mind responds to the world in non-categorical ways as well. For example, when responding to music, people generally do not think in categories, and yet they can make extremely complicated patterns and connections. It is a form of thought probably more complex in humanity than in animals although not unique to our species. Many other examples could be suggested but I’ll save that for another time.

More key here is the idea of recognition of individuals. Though we may at times reduce people to a concept of themselves, we still recognize something unique by a personal name. Such referencing applies to buildings, places, monuments, dates, royal babies and countless other aspects of life as well. The claim of certain schools of thought, like the language philosophers associated with post-Hegelian, Sellarsian, or Wittgensteinian thinkers, is that it is impossible for a human being to think without thinking in concepts: any time a word is used, it refers to a group or type of thing, as well as the unique referent. This is what it means to make a concept, and from Plato through Kant this has been touted as a monumental human achievement.

While it is an important aspect of how we organize and stack our thinking, it is central to remember the unique component as much as the categorical. If we think in terms of the individual, it becomes clear that the conceptual aspects are choices we make to significant degrees. Levinas speaks of the importance of the recognition of “the face of the other” in an ethical interaction, and I think it is possible to apply this to our broader interaction with the world. Everything experienced is unique. It may be comparable to other substances or moments, but it is only through laziness and, after industrialization, strong habituation that we equate distinct things. We still experience the individual.

Our conceptualizing tendencies overall should be recognized as tools that can both help and harm our understanding. This is undeniable when applied to human beings. The fact that we can make faster decisions by applying broad categories, but that it can result in gross misunderstandings is true of smaller parts of life as well. Being more patient, more nuanced and more observant of the individual case allows a kind of knowledge with fewer assumptions, even if it may allow for less immediate utility.

Some will push for the division between people and other cases (Sartre would argue a free consciousness changes everything, for instance) but even if we were to grant this the problem of thinking in categories remains. The very idea that individuals of any kind can be “exact expressions of one soul” paves the way for a certain habit of thinking. Because we use the same word, we assume the same essence, and come to understand an equivalence as soon as something has been identified. A black man in a hoodie, or a young blonde woman, can face certain presumptions just by belonging to a category, and in time these attitudes can affect the way they understand themselves and behave as well, encouraging the stereotypes.

But if we are able to understand categories as just tentative judgments that help us clarify the world, though sometimes at the cost of complexity, our thought can be more developed. A reflective interplay of incomplete categorization and non-categorical consideration can allow for creativity, originality, and a better chance at reaching something like truth. On the other hand, if we think categories simply reveal essential natures, and we understand races and genders as categories that define people, it becomes a social norm to call the cops on certain bike thieves, leave some alone, and try to flirt with others.

On warranted deference

By their nature, skeptics have a hard time deferring. And they should. One of the classic (currently undervalued) selling points for any course in critical thinking is that it grants people an ability to ratchet down the level of trust that they place in others when it is necessary. However, conservative opinion to the contrary, critical thinkers like trust just fine. We only ask that our trust should be grounded in good reasons in cooperative conversation.

Here are two maxims related to deference that are consistent with critical thinking:

(a) The meanings of words are fixed by authorities who are well informed about a subject. e.g., we defer to the international community of astronomers to tell us what a particular nebula is called, and we defer to them if they should like to redefine their terms of art. On matters of definition, we owe authorities our deference.

(b) An individual’s membership in the group grants them prima facie authority to speak truthfully about the affairs of that group. e.g., if I am speaking to physicists about their experiences as physicists, then all other things equal I will provisionally assume that they are better placed to know about their subject than I am. The physicist may, for all I know, be a complete buffoon. (S)he is a physicist all the same.

These norms strike me as overwhelmingly reasonable. Both follow directly from the assumption that your interlocutor, whoever they are, deserves to be treated with dignity. People should be respected as much as is possible without doing violence to the facts.

Here is what I take to be a banal conclusion:

(c) Members of group (x) ought to defer to group (y) on matters relating to how group (y) is defined. For example, if a philosopher of science tells the scientist what counts as science, then it is time to stop trusting the philosopher.

It should be clear enough that (c) is a direct consequence of (a) and (b).

Here is a claim which is a logical instantiation of (c):

(c’) Members of privileged groups ought to defer to marginalized groups on matters relating to how the marginalized group is defined. For example, if a man gives a woman a lecture on what counts as being womanly, then the man is acting in an absurd way, and the conversation ought to end there.

As it turns out, (c’) is either a controversial claim, or is a claim that is so close to being controversial that it will reliably provoke ire from some sorts of people.

But it should not be controversial when it is understood properly. The trouble, I think, is that (c) and (c’) are close to a different kind of claim, which is genuinely specious:

(d) Members of group (x) ought to defer to group (y) on any matters relating to group (y).

Plainly, (d) is a crap standard. I ought to trust a female doctor to tell me more about my health as a man than I trust myself, or my male barber. The difference between (d) and (c) is that (c) is about definitions (‘what counts as so-and-so’), while (d) is about any old claim whatsoever. Dignity has a central place when it comes to a discussion about what counts as what — but in a discussion of bare facts, there is no substitute for knowledge.

**

Hopefully you’ve agreed with me so far. If so, then maybe I can convince you of a few more things. There are ways that people (including skeptics) are liable to screw up the conversation about warranted deference.

First, unless you are in command of a small army, it is pointless to command silence from people who distrust you. e.g., if Bob thinks I am a complete fool, then while I may say that “Bob should shut up and listen”, I should not expect Bob to listen. I might as well give orders to my cat for all the good it will do.

Second, if somebody is not listening to you, that does not necessarily mean you are being silenced. It only means you are not in a position to have a cooperative conversation with them at that time. To be silenced is to be prevented from speaking, or to be prevented from being heard on the basis of perverse non-reasons (e.g., prejudice and stereotyping).

Third, while intentionally shutting your ears to somebody else is not in itself silencing, it is not characteristically rational either. The strongest dogmatists are the quietest ones. So a critical thinker should still listen to their interlocutors whenever practically possible (except, of course, in cases where they face irrational abuse from the speaker).

Fourth, it is a bad move to reject the idea that other people have any claim to authority, when you are only licensed to point out that their authority is narrowly circumscribed. e.g., if Joe has a degree in organic chemistry, and he makes claims about zoology, then it is fine to point out the limits of his credentials, and not fine to say “Joe has no expertise”. And if Petra is a member of a marginalized group, it is no good to say that Petra has no knowledge of what counts as being part of that group. As a critical thinker, it is better to defer.

[Edit: be sure to check the comments thread for great discussion!]

Violence & Video Games, Yet Again.

Manhunt (video game) (Photo credit: Wikipedia)

While there is an abundance of violence in the real world, there is also considerable focus on the virtual violence of video games. Interestingly, some people (such as the head of the NRA) blame real violence on the virtual violence of video games. The idea that art can corrupt people is nothing new and dates back at least to Plato’s discussion of the corrupting influence of art. While he was mainly worried about the corrupting influence of tragedy and comedy, he also raised concerns about violence and sex. These days we generally do not worry about the nefarious influence of tragedy and comedy, but there is considerable concern about violence.

While I am a gamer, I do have concerns about the possible influence of video games on actual behavior. For example, one of my published essays is on the distinction between virtual vice and virtual virtue and in this essay I raise concerns about the potential dangers of video games that are focused on vice. While I do have concerns about the impact of video games, there has been little in the way of significant evidence supporting the claim that video games have a meaningful role in causing real-world violence. However, such studies are fairly popular and generally get attention from the media.

The most recent study purports to show that teenage boys might become desensitized to violence because of extensive playing of video games. While some folks will take this study as showing a connection between video games and violence, it is well worth considering the details of the study in the context of causal reasoning involving populations.

When conducting a cause to effect experiment, one rather important factor is the size of the experimental group (those exposed to the cause) and the control group (those not exposed to the cause). The smaller the number of subjects, the more likely it is that the difference between the groups is due to factors other than the (alleged) causal factor. There is also the concern with generalizing the results from the experiment to the whole population.

The experiment in question consisted of 30 boys (ages 13-15) in total. As a sample for determining a causal connection, this is too small for real confidence to be placed in the results. There is also the fact that the sample is far too small to support a generalization from the 30 boys to the general population of teenage boys. In fact, the experiment hardly seems worth conducting with such a small sample and is certainly not worth reporting on, except as an illustration of how research should not be conducted.
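
As a rough illustration of just how underpowered a 30-subject experiment is, here is a sketch of a standard power calculation (my own, using the statsmodels power calculator; the 0.05 significance level and 80% power target are conventional defaults, not figures from the study):

    # Sketch: power analysis for a two-group design, 15 subjects per group.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()

    # Smallest standardized effect (Cohen's d) detectable with 80% power
    # at the 0.05 significance level, given 15 subjects per group:
    d = power.solve_power(nobs1=15, alpha=0.05, power=0.8)
    print(f"Detectable effect with 15 per group: d = {d:.2f}")  # about 1.1, very large

    # Subjects needed per group to detect a medium effect (d = 0.5):
    n = power.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"Per-group n for a medium effect: {n:.0f}")  # about 64

In other words, with 15 boys per group only an enormous effect could reliably show up at all.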

The researchers had the boys play a violent video game and a non-violent video game in the evening and compared the results. According to the researchers, those who played the violent video game had faster heart rates and lower sleep quality. They also reported “increased feelings of sadness.”  After playing the violent game, the boys  had greater stress and anxiety.

According to one researcher, “The violent game seems to have elicited more stress at bedtime in both groups, and it also seems as if the violent game in general caused some kind of exhaustion. However, the exhaustion didn’t seem to be of the kind that normally promotes good sleep, but rather as a stressful factor that can impair sleep quality.”

Being a veteran of violent video games, I find these results consistent with my own experiences. I have found that if I play a combat game, be it a first person shooter, an MMO or a real time strategy game, too close to bedtime, I have trouble sleeping. Crudely put, I find that I am “keyed up” and if I am unable to “calm down” before trying to sleep, my sleep is generally not very restful. I really noticed this when I was raiding in WoW. A raid is a high stress situation (game stress, anyway) that requires hyper-vigilance and it takes time to “come down” from that. I have experienced the same thing with actual fighting (martial arts training, not random violence). I’ve even experienced something comparable when I’ve been awoken by a big spider crawling on my face; I did not sleep quite so well after that. Graduate school, as might be imagined, put me into this state of poor sleep for about five years.

In general, then, it makes sense that violent video games would have this effect, which is why it is not a good idea to game right up until bedtime if you want to get a good night’s sleep. Of course, it is generally a good idea to relax for about an hour before bedtime: don’t check email, don’t get on Facebook, don’t do work and so on.

While not playing games before bedtime is a good idea, the question remains as to how these findings connect to violence and video games. According to the researchers, the differences between the two groups “suggest that frequent exposure to violent video games may have a desensitizing effect.”

Laying aside the problem that the sample is far too small to provide significant results that can be reliably extended to the general population of teenage boys, there is also the problem that there seems to be a rather large chasm between the observed behavior (anxiety and lower sleep quality) and being desensitized to violence. The researchers do note that the cause and effect relationship was not established and they did consider the possibility of reversed causation (that the video games are not causing these traits, but that boys with those traits are drawn to violent video games).  As such, the main impact of the study seems to be that it got media attention for the researchers. This would suggest another avenue of research: the corrupting influence of media attention on researching video games and violence.


For Better or Worse Reasoning in Print

Why listen to illogical diatribes when you can read them? I mean, read a rational examination of the arguments against same-sex marriage.

This concise work is aimed at presenting a logical assessment of the stock arguments against same-sex marriage. While my position is in favor of legalizing same-sex marriage, I have made every effort to present a fair and rational assessment of the stock arguments against it. The work itself is divided into distinct sections. The first section provides some background material regarding arguments. The second section focuses on the common fallacious arguments used to argue against same-sex marriage. The third section examines standard moral arguments against same-sex marriage and this is followed by a brief look at the procreation argument. The work closes, appropriately enough, with a few modest proposals regarding marriage.

Amazon (US)

Amazon (UK)

 


Euphemism

A pre-owned car. (Photo credit: Wikipedia)

I was assigned to committee number eight at 5:00 pm today, so I’m facing a bit of a challenge getting regular posts completed on time. I’ve also got the seven year program review, 4 classes and much more…

But, since I am working on a book on rhetoric, I can inflict some rough draft material on you until I either a) get more time or b) die.

Euphemism

When I was a kid, people bought used cars. These days, people buy fine pre-owned cars. There is no difference between the meaning of “used car” and “pre-owned car”—both refer to the same thing, namely a car someone else has owned and used. However, “used” sounds a bit nasty, perhaps suggesting that the car might be a bit sticky in places. In contrast, “pre-owned” sounds rather better. By substituting “pre-owned” for “used”, the car sounds somehow better, although it is the same car whether it is described as used or pre-owned.

If you need to make something that is negative sound positive without actually making it better, then a euphemism should be your tool of choice. A euphemism is a pleasant or at least inoffensive word or phrase that is substituted for a word or phrase that means the same thing but is unpleasant, offensive, or otherwise negative in terms of its connotation. To use an analogy, using a euphemism is like coating a bitter pill with sugar, making it easier to swallow.

The way to use a euphemism is to replace the key words or phrases that are negative in their connotation with those that are positive (or at least neutral). Naturally, it helps to know what the target audience regards as positive words, but generically positive words can do the trick quite well.

The defense against a euphemism is to replace the positive term with a neutral term that has the same meaning. For example, if someone says, “An American citizen was inadvertently neutralized during a drone strike”, the neutral presentation would be “An American citizen was killed during a drone strike.” While “killed” does have a negative connotation, it does describe the situation with more neutrality.
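
As a toy illustration of this defense (my own sketch, not from the book draft; the word list is improvised from the examples below), the substitution could even be automated:

    # Toy sketch: mapping euphemisms back to neutral terms.
    NEUTRAL = {
        "pre-owned": "used",
        "neutralized": "killed",
        "enhanced interrogation": "torture",
        "revenue enhancement": "tax increase",
        "down-sized": "fired",
    }

    def defuse(sentence):
        """Replace known euphemisms with neutral wording."""
        for euphemism, neutral in NEUTRAL.items():
            sentence = sentence.replace(euphemism, neutral)
        return sentence

    print(defuse("An American citizen was inadvertently neutralized during a drone strike."))
    # -> An American citizen was inadvertently killed during a drone strike.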

In some cases, euphemisms are used for commendable reasons, such as being polite in social situations or to avoid exposing children to “adult” concepts. For example, at a funeral it is considered polite to refer to the dead person as “the departed” rather than “the corpse.”

Examples of Euphemisms

“Pre-owned” for “used.”

“Neutralization” for “killing.”

“Freedom fighter” for “terrorist.”

“Revenue enhancement” for “tax increase.”

“Down-sized” for “fired.”

“Between jobs” for “unemployed.”

“Passed” for “dead.”

“Office manager” for “secretary.”

“Custodian” for “janitor.”

“Detainee” for “prisoner.”

“Enhanced interrogation” for “torture.”

“Self-injurious behavior incidents” for “suicide attempts.”

“Democrat” for “Communist.”
