Category Archives: Philosophy

Dating: Age is Not Just a Number

Being a philosopher and single again, I have been overthinking the whole dating thing. I suspect that those who give it little or no thought do much better; but I am what I am and therefore I must overthink. An interesting adventure in interaction provided me with something new, or rather old, to think about: age and dating. In this scenario I was talking with a woman and actually had no intention of making any overtures or moves (smooth or otherwise). With some storytelling license in play, we join the story in progress.

Her: Flirt. Flirt. Flirt.

Her: “So, what do you do for work?” Flirt.

Me: “I’m a philosophy professor.”

Her: “At FSU?” Flirt.

Me: “No, literally across the tracks at FAMU.”

Her: “When did you start?” Flirt.

Me: “1993.”

Her: “1993…how old are you?”

Me: “Fifty.”

At this point, she dropped out of flirt mode so hard that it damaged the space-time continuum. Windows cracked. Tiny fires broke out in her hair. Car alarms went off. Pokémon died. Squirrels were driven mad and fled in terror, crying out to their dark rodent gods for salvation. As my friend Julie commented, I had “instantly gone from sexable to invisible.”  Here is how the conversation ended:

Her: “Um, I bet my mother would like you. Oh, look at the time…I have to go now.”

Me: “Bye.”

While some might have found such an experience ego-damaging, my friends know I have an adamantine ego. Also, I am always glad to get a good story that provides an opportunity for some philosophical analysis. What struck me most about this episode is that the radical change in her behavior was due entirely to her learning my age—I can only infer that she had incorrectly estimated I was younger than fifty. Perhaps she had forgotten to put in her contacts. So, on to the matter of age and dating.

While some might claim that age is just a number, that is not true. Age is rather more than that. At the very least, it is clearly a major factor in how people select or reject potential dates. On the face of it, the use of age as a judging factor should be seen as perfectly fine and is no doubt grounded in evolution. The reason is, of course, that dating is largely a matter of attraction and this is strongly influenced by preferences. One person might desire the feeble hug of a needy nerd, while another might crave the crushing embrace of a jock dumb as a rock. Some might swoon for eyes so blue, while others might have nothing to do with a man unless he rows crew. Likewise, people have clear preferences about age. In general, people prefer those close to them in age, unless there are other factors in play. Men, so the stereotype goes, have a marked preference for younger and younger women the older and older they get. Women, so the stereotype goes, will tolerate a wrinkly old coot provided that he has sufficient stacks of the fattest loot.

Preferences in dating are, I would say, analogous to preferences about food. One cannot be wrong about these and there are no grounds for condemning or praising such preferences. If Sally likes steak and tall guys, she just does. If Sam likes veggie burgers and winsome blondes, he just does. As such, if a person prefers a specific age range, that is completely and obviously their right. As with food preferences, there is little point in trying to argue—people like what they like and dislike what they dislike. That said, there are some things that might seem to go beyond mere preferences. To illustrate, I will offer some examples.

There are white people who would never date a black person. There are black people who would never date anyone but another black person. There are people who would never date a Jew. There are others for whom only a Jew will do. Depending on the cause of these preferences, they might be better categorized as biases or even prejudices. But, it is worth considering that these might be benign preferences. It could be, for example, that a white person has no racial bias but simply prefers light skin to dark skin for the same sort of reason one might prefer brunettes to blondes. Then again, they might not be so benign.

People are chock full of biases and prejudices and it should come as no surprise that they influence dating behavior. On the one hand, it is tempting to simply accept these prejudices in this context on the grounds that dating is entirely a matter of personal choice. On the other hand, it could be argued that such prejudices are problematic even in the context of dating. This is not to claim that people should be subject to some sort of compelled diversity dating, just that perhaps they should be criticized.

When it comes to apparent prejudices, it is worth considering that the apparent prejudice might be a matter of innocent ignorance. That is, the person merely lacks correct information. Assuming the person is not willfully and actively ignorant, this is not to be condemned as a moral flaw since it can be easily fixed by the truth. To go back to the food analogy, imagine that Jane prefers Big Macs because she thinks they are healthy and refuses to eat avocados because she thinks they are unhealthy. Given what she thinks, it is reasonable for her to eat Big Macs and avoid avocados. If she knew the truth, she would change her eating habits since she wants to eat healthily—she is merely ignorant. Likewise, if Jane believed that black men are all uneducated thugs, then it would seem reasonable for her not to want to date a black man given what she thinks she knows. If she knew the truth, her view would change. As such, she is not prejudiced—just ignorant.

It is also worth considering that an apparent prejudice might be a real prejudice—that the person would either refuse to accept the facts or would maintain the same behavior in the face of them. As an example, suppose that Sam thinks that white people are all complete racists and thus refuses to even consider dating a white person on this basis. While it is often claimed that everyone is racist, it is clear that not all white people are complete racists. As such, if Sam persisted in his belief or behavior in the face of the facts, then it would be reasonable to condemn him for his prejudices.

Finally, it might even be the case that the alleged prejudice is actually rational and well founded. To use a food analogy, a person who will not eat raw steak because she knows the health risks is not prejudiced but quite reasonable. Likewise, a person who will not date a person who is a known cheater is not prejudiced but quite rational.

The question at this point is where age fits in regard to the above considerations. The easy and obvious answer is that it can fall into all three. If a person’s dating decisions are based on incorrect information about age, then they have made an error of ignorance. If a person’s decisions are based on mere prejudice, then they have made a moral error. But, if the decision regarding age and dating is rational and well founded, then the person would have made a good decision. As should be suspected, the specifics of the situation are what matter. That said, there are some general categories relating to age that are worth considering.

Being fifty, I am considering these matters from the perspective of someone old. Honesty compels me to admit that I am influenced by my own biases here and, as my friend Julie has pointed out, older men are full of delusions about age. However, I will endeavor to be objective and will lay out my reasoning for your assessment.

The first is the matter of health. In general, as people get older, their health declines. For example, older people are more likely to have colon cancer—hence people who are not at risk do not get colonoscopies until fifty. Because of this, it is quite reasonable for a younger person to be concerned about dating someone older—that person is more likely to get ill. That said, an older person can be far healthier than a younger person. As such, it might come down to whether a person looks at dating options broadly in terms of categories of people (such as age or ethnicity) or is more willing to consider individuals who might differ from the stereotypes of those categories. Using categories does help speed up decisions, although doing so might result in missed opportunities. But, there are billions of humans—so categories could be just fine if one wants to narrow their focus.

While an older person might not be sick, age does weaken the body. For example, I remember being bitterly disappointed by a shameful 16:28 5K in my youth. Now I have to struggle to maintain that pace for a half mile. Back then I could easily do 90-100 miles a week; now I do 50-60. Time is cruel. For those who are concerned about a person’s activity levels, age is clearly a relevant factor and provides a reasonable basis for not dating an older (or younger) person that is neither an error nor a prejudice. However, an older person can be far more fit and active than a younger person—so that is worth considering before rejecting an entire category of people.

Life expectancy is also part of the health concerns. A younger person interested in a long term relationship would need to consider how long that long term might be and this would be quite rational. To use an obvious analogy, when buying a car, one should consider the miles on it. Women also live longer than men, so that is a consideration as well. Since I am a fifty-year-old American living in Florida, the statistics say I have about 26 years left. Death sets a clear limit to how long term a relationship can be. But, life expectancy and quality of life are influenced by many factors and they might be worth considering. Or not. Because, you know, death.

The second broad category is that of interests and culture. Each person is born into a specific temporal culture and that shapes her interests. For example, musical taste is typically set in this way and older folks famously differ in their music from younger folks. What was once rebellious rock becomes a golden oldie. Fashion is also very much a matter of time, although styles have a weird way of cycling back into vogue, like those damn bell bottoms. Thus people who differ in age are people from different cultures and that presents a real challenge. An old person who tries to act young typically only succeeds in appearing absurd. One who does not try will presumably not fit in with a younger person. So, either way is a path to failure. Epic failure.

There is also the fact that interests change as a person gets older. To use some stereotypes, older folks are supposed to love shuffleboard and bingo while the youth are now into extreme things that would presumably kill or baffle old people, like virtual reality and Snapchat. Party behavior also differs. Young folks go to parties to drink, talk about their jobs and get laid. Older folks go to parties to drink, talk about their jobs and get laid. These are radical differences that cannot be overcome. It could be countered that there can be shared interests between people of different ages and that a lack of shared interests is obviously not limited to those who differ in age. The response is that perhaps the age difference would generally result in too great a difference in interests, thus making it rational and reasonable to avoid dating people who differ enough in age.

The third broad category is concerns about disparities in power. An older adult will typically have a power advantage over a younger adult and this raises moral concerns regarding exploitation (there is also a reverse concern: that a younger person will exploit an older person). Because of this, a younger adult should be rightly concerned about being at a disadvantage relative to an older person. Of course, this concern is not just limited to age. If the concern about power disparity is important, then it would also apply to disparities in education, income, abilities and intelligence between people in the same age group. That said, the disparities would tend to be increased with an age difference. As such, it is reasonable to be concerned about this factor.

The fourth broad category is what could be called the “ick factor.” While there is considerable social tolerance for rich old men having hot young partners, people dating or attempting to date outside of their socially defined age categories are often condemned because it is seen as “icky” or “gross.” When I was in graduate school, I remember people commenting on how gross it was for old faculty to hook up with young graduate students. Laying aside the exploitation and unprofessionalism, it did seem rather gross. As such, the ick argument has considerable appeal. But, there is the question of whether the perceived grossness is founded or not. On the one hand, it can be argued that grossness is in the eye of the beholder or that grossness is set by social norms and these serve as proper foundations. On the other hand, it could be contended that the perception of grossness is a mere unfounded prejudice. On the third hand, the grossness could be cashed out in terms of the above categories. For example, it is icky for an unhealthy and weak rich man to date a hot, healthy young woman with whom he has no real common interests (beyond money, of course).

Fortunately, this is a problem with a clear solution: if you do not die early, you get old. Then you die. Problem solved.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

The Trumpernaut

When Trump began his bid for the Presidency in 2015, it was largely dismissed as a joke. He then trounced his Republican opponents. So as to not let them forget their shame, Trump still occasionally takes shots at his fallen rivals. As this is being written, Trump has a very real chance of winning the election, sending Hillary Clinton’s dream of being the first female president into the flaming dumpster of history.

Trump’s success was a shock to the elites of many realms, from the top pundits to the Republican leadership. Liberal intellectuals, who once mocked Trump with witty remarks between sips of their gluten free lattes, are now mopping the sweat from their fevered brows with woven hemp handkerchiefs. Sane commentators predicted, with each horrific spew from Trump’s word port, that Trump would be brought down with a huge and luxurious self-inflicted wound. Now the sane commentators have gazed into the mouth of madness and have accepted that there seems to be nothing that Trump can say that would derail the onslaught of the Trumpernaut.

Trump’s run, win or lose, will be a treasure trove for many dissertations in psychology, political science and other fields as thinking people try to analyze this phenomenon from the perspective of history. There is, of course, considerable speculation about the foundation for Trump’s success. Or, more accurately, his lack of failure.

As someone who teaches critical thinking, I find one of the most striking things about Trump’s success to be that many of the reasons Trump supporters give for supporting him are objectively unfounded in reality. One of the main mantras of Trump backers is that Trump “tells it like it is.” The usual meaning of these words is that a person is saying what is true. After all, “like it is” is supposed to refer to what the world in fact is and not what is not. As a matter of objective fact, Trump rarely “tells it like it is.” The proof of this can be found on Trump’s Politifact page: only 4% of Trump’s evaluated claims have been rated true and 11% mostly true. This is hardly telling it like it is. Yet, Trump supporters persist in claiming that he tells it like it is, despite the fact that he does not.

One possible explanation is that his supporters believe his claims. If so, they would certainly think that he tells it like it is. This would require either never making an inquiry into the truth of Trump’s claims or refusing to accept the inquiries that have been made. Trump has, of course, availed himself of a sword forged and often wielded by other Republicans, which is the attack on the “liberal media” as biased. This allows any assessment of Trump’s claims to be dismissed.

Another possibility is that their use of the phrase is meaningless, a mere parroting of Trump’s talking point. This would be analogous to the repetition of other empty advertising slogans, like “it gets clothes brighter than bright” or, for those more cynical than I, “hope and change.” If someone is asked why they back Trump, they typically feel the need to present a reason, and this empty saying no doubt pops into the mind.

His supporters also claim that they back him because of his great business success. While it is true that the Trump brand is known worldwide, it is not clear that he has been a great success in business. Newsweek, which was once a success itself, has done a rundown of Trump’s many business failures. While it is true that Trump’s people have skillfully used the bankruptcy laws and threats of lawsuits, this seems to be rather different from the sort of business success that people attribute to him. Some critics have speculated that Trump is refusing to release his tax forms (which he can—the IRS does not forbid people who are being audited from releasing their forms) because they would show he is not as wealthy as he claims. This is, of course, speculation and Trump could have other good reasons for not releasing the forms. Of course, some might make use of the classic cry of “what is he hiding?” Trump can, obviously, claim to be something of a success: he is world famous and clearly has his name on many things.

Trump supporters also use the talking point that Trump is not politically correct. This is true—Trump relentlessly says things that horrify and terrify the guardians of political correctness. To those who are tired of the political correctness enforcers, this is very appealing.

However, Trump goes far beyond not being politically correct and, some would claim, he heads into racism and sexism. This has suggested to some critics that Trump’s backers are racists and sexists who like what he has to say.  He also routinely crosses boundaries of decency that, until Trump, most Americans thought no candidate (or decent human being) would cross. The latest example is his battle with the Khan family, whose son was an Army captain killed in Iraq. Normally a savage attack on a Gold Star family would be a death blow to a candidate. However, while Trump’s backers often condemn his remarks, they stick with him. One possibility is that although they condemn his remarks in public, they secretly agree with these claims. Another possibility is that the offenses are condemned but are not regarded as serious enough to break the deal. This would, of course, require that there be other motives to support Trump.

For many, the best reason to back Trump is that he is not Hillary Clinton. As pundits like to point out, Trump and Hillary have record high unfavorable ratings. There are also people who are party loyalists (or at least party pragmatists) who support Trump because he is the Republican candidate. Interestingly, Trump is also attracting support from voters who have traditionally backed the Democrats—that is, working class whites.

A final talking point used by Trump supporters is that he is against the elites. This is amazing in its irony: Trump was born into wealth and has always been among the moneyed elites. That said, Trump does have a persona that some would regard as crude and non-elite. Trump is tapping into a very real sense of anger and desperation among Americans who believe, with complete correctness, that they have largely been abandoned by the elites. I certainly get this. I am from Old Town, Maine—a very small town that relied on the paper mill for employment and tax revenue. After ownership of the mill shifted a few times, the last owner shut down operations, presumably going overseas. When I was a kid, the mill smelled bad—which my dad called the “smell of money.” That smell is now gone, and my hometown is struggling. My dad said that there are about fifty abandoned houses in town, and on my runs I saw many empty houses—including the house I grew up in. Meanwhile, we get to see app billionaires on the Late Show with Stephen Colbert talk about their billions. Those who dig into the numbers see that the elites have consistently gotten their way at the expense of the rest of us; that the economic success at the top has not trickled down, and that we will be worse off than our predecessors. Our elites have failed us and we have failed by making them our elites.

Trump, the elite billionaire who got his start with a “little loan” of a million dollars from his father, is able to somehow tap into this anger. Most likely because Hillary is clearly identified with the elites that have failed us so badly. That is, Trump is seen as the only viable option, the only voice for the non-elite.

This itself is a sign of the failure of our elites—that so many people regard Trump as their only hope. Or perhaps they see him as someone who will burn it all in an act of vengeance against the elites. While I do understand the rage against the failures of the elite and get that Hillary is the elitist of the elite, Trump is not the savior of America. Voting for Hillary is essentially voting for more of the same. But voting for Trump is to vote for disaster.


Tearing Down


Politics has always been a nasty business, but the fact that examples of historic awfulness can be easily found does not excuse the current viciousness. After all, appealing to tradition (reasoning that something is acceptable because it has been done a long time) and appealing to common practice (reasoning that something being commonly done makes it acceptable) are both fallacies.

One manifestation of the nastiness of politics is that it no longer suffices to regard an opponent as merely wrong; the opponent must also be torn down and cast as morally wicked. To be fair, there are cases in which people really are both wrong and morally wicked. As such, my concern is with cases in which the tearing down is not warranted.

I certainly understand the psychological appeal of this approach. It is natural to regard opponents as holding on to their views because they are bad people—in contrast to the moral purity that grounds one’s own important beliefs. In some cases, there is a real conflict between good and evil. For example, those who oppose slavery are morally better than those who practice the enslavement of their fellow human beings. However, most political disputes are disagreements in which all sides are a blend of right and wrong—both factually and morally. For example, the various views about the proper size of government tend to be blended in this way. Unfortunately, political ideology can become part of a person’s core identity—thus making any differing view appear as a vicious assault on the person themselves. A challenge to their very identity that could only come from the vilest of knaves. Politicians and pundits also intentionally stoke these fires, hoping to exploit irrationality and ungrounded righteous rage to ensure their election and to get their way.

While academic philosophy is not a bastion of pure objective rationality, one of the most important lessons I have learned in my career is that a person can disagree with me about an important issue, yet still be a fine human being. Or, at the very least, not a bad person. In some cases, this is easy to do because I do not have a strong commitment to my position. For example, while I do not buy into Plato’s theory of forms, I have no real emotional investment in opposing it. In other cases, such as moral disputes, it is rather more difficult. Even in cases in which I have very strong commitments, I have learned to pause and consider the merits of my opponent’s position while also taking care to distinguish the philosophical position taken from the person who takes it. I also take care to regard their criticisms of my view as being against my view and not against me as a person. This allows me to debate the issue without it becoming a personal matter that threatens my core identity. It also helps that I know that simply attacking the person making a claim is just a form of the ad hominem fallacy.

It might be objected that this sort of approach to disputes is bloodless and unmanly—that one should engage with passion and perhaps, as Trump would say, want to hit someone. The easy reply is that while there is a time and a place for punching, the point of a dispute over an issue is to resolve it in a rational manner. A person can also be passionate without being uncivil and vicious. Unfortunately, vicious attacks are part of the political toolkit.

One recent and reprehensible example involves the attacks on Ghazala and Khizr Khan, the parents of Captain Humayun Khan (who was killed in Iraq in 2004). Khizr Khan spoke out against Donald Trump’s anti-Muslim rhetoric and asserted that Trump did not understand the Constitution. While Trump had every right to address the criticisms raised against him, he took his usual approach of trying to tear down a critic. Trump’s engagement with the family led to bipartisan responses, including an extensive response from John McCain, who was tortured as a prisoner of war during the Vietnam War. Trump, against the rules of basic decency, continued to launch attacks on Khan.

Since I have a diverse group of friends, I was not surprised when I saw posts appearing on Facebook attacking Khan. One set of posts linked to Shoebat.com’s claim that Khan “is a Muslim brotherhood agent who wants to advance sharia law and bring Muslims into the United States.” As should come as no surprise, Snopes quickly debunked this claim.

Breitbart.com also leaped into the fray, asserting that Khan “financially benefits from unfettered pay-to-play Muslim migration into America.” The site also claimed that Khan had deleted his law firm’s website. On the one hand, it is certainly legitimate journalism to investigate speakers at the national convention. After all, undue bias legitimately damages credibility and it is certainly good to know about any relevant misdeeds lurking in a person’s past. On the other hand, endeavoring to tear a person down and thus “refute” their criticism is simply an exercise in the ad hominem fallacy. This is bad reasoning in which an attack on a person is taken to thus refute their claims. Even if Khan ran a “pay to play” system and even if he backed Sharia law, his criticisms of Donald Trump stand or fall on their own merits—and they clearly have merit. There is also the moral awfulness in trying to tear down a Gold Star family. As many have pointed out, such an attack would normally be beyond the pale. Trump, however, operates far beyond this territory. One of the worst aspects of this is that although he draws criticism even from the Republican leadership, his support remains strong. He is, perhaps, changing the boundaries of acceptable behavior in a way that might endure beyond his campaign—a change for the worse.

It might be objected that a politician must reply to critics, otherwise the attacks will stand. While this is a reasonable point, the reply made matters. It is one thing to respond to the criticisms by countering their content, quite another to launch a personal attack against a Gold Star family.

It could also be objected that engaging in a rational discussion of the actual issues is too difficult and would not be understood by the public. They can only handle emotional appeals and simplistic notions. Moral distinctions are irrelevant and decency is obsolete. Hence, the public discourse must be conducted at a low level—Trump gets this and is acting accordingly. My only reply is that I hope, but cannot prove, that this is not the case.

 


Divisive Obama


One of the relentless talking points of conservative pundits and many Republicans is that Obama is divisive. Perhaps even the most divisive president in American history. It is, in fact, a common practice to engage in a point-by-point analysis of Obama’s alleged divisiveness. As should be expected, supporters of Obama deny that he is divisive; or at least claim he is not the most divisive president.

It is almost certainly pointless to try to argue about whether Obama is divisive or not. Since this is a matter of political identity, the vast majority of people cannot be influenced by any amount of evidence or argumentation against their position. However, one of the purposes of philosophy is the rational assessment of beliefs even when doing so will convince no one to change their views. While I do not expect to change any hearts (for this is a matter of feeling and not reason), the endeavor is still worthwhile: it can advance our understanding of divisiveness and of accusations about it.

Since analogies are often useful for enhancing understanding, I will make a comparison with fright. This requires a story from my own past. When I was in high school, our English teacher suggested a class trip to Europe. As with just about anything involving education, fundraising was necessary and this included what amounted to begging (with permission) at the local Shop N’ Save grocery store. As beggars, we worked in teams of two and I was paired up with Gopal. When the teacher found out about this (and our failure to secure much, if any, cash) she was horrified: we were frightening the old people; hence they were not inclined to even approach us, let alone donate to send us to Europe. As I recall, she said the old folks saw us as “thugs.”

I have no reason to doubt that some of the old folks were, in fact, frightened of us. As such, it is true that we were frightening. The same can be said about Obama: it is obviously true that many people see him as divisive and thus he is divisive. This is also analogous to being offensive: if a person is offended by, for example, a person’s Christian faith or her heterosexuality, then those things are offensive. To use another analogy, if a Christian is hired into a philosophy department composed mainly of devout atheists and they dislike her for her faith and it causes trouble in the department, then she is divisive. After all, the department would not be divided but for her being Christian.

While it is tempting to leave it at this, there seems more to the charge of divisiveness than a mere assertion about how other people respond to a person. After all, when Obama is accused of being divisive, the flaw is supposed to lie with Obama—he is condemned for this. As such, the charge of divisiveness involves placing blame on the divider. This leads to the obvious question about whether or not the response is justified.

Turning back to my perceived thuggery at Shop N’ Save, while it was true that Gopal and I frightened some old people, the question is whether or not they were justified in their fear. I would say not, but since I am biased in my own favor I need to support this claim. While Gopal and I were both young men (and thus a source of fear to some), we were hardly thugs. In fact, we were hardcore nerds: we played Advanced Dungeons & Dragons, we were on the debate team, and we did the nerdiest of sports—track. For teenagers, we were polite and well behaved. We were certainly not inclined to engage in any thuggery towards older folks in the grocery store. As such, the fear was unwarranted. In fairness, the old people might not have known this.

In the case of Obama, the question is whether or not his alleged divisiveness has a foundation. This would involve assessing his words and deeds to determine if an objective observer would regard them as divisive. In this case, divisive words and deeds would be such that initially neutral and unbiased Americans would be moved apart and inclined to regard each other with hostility. There is, of course, an almost insurmountable obstacle here: those who regard Obama as divisive will perceive his words and deeds as having these qualities and will insist that a truly objective observer would see things as they do. His supporters will, of course, contend the opposite. While Obama has spoken more honestly and openly about such subjects as race than past presidents, his words and deeds do not seem to be such that a neutral person would be turned against other Americans on their basis. He does not, for example, make sweeping and hateful claims based on race and religion. Naturally, those who think Obama is divisive will think I am merely expressing my alleged liberal biases while they regard themselves as gazing upon his divisiveness via the illumination of the light of pure truth. Should Trump win in 2016, the Democrats will certainly accuse him of being divisive—and his supporters will insist that he is a uniter and not a divider. While whether or not a claim of divisiveness is well founded is a matter of concern, there is also the matter of intent. It is to this I now turn.

Continuing the analogy, a person could have qualities that frighten others and legitimately do so; yet the person might have no intention of creating such fear. For example, a person might not understand social rules about how close he should get to other people and when he can and cannot touch others. His behavior might thus scare people, but since he acts from ignorance rather than malice, he has no intention to scare others—in fact, he might intend quite the opposite. Such a person could be blamed for the fear he creates to the degree that he should know better, but intent would certainly matter. After all, to frighten through ignorance is rather different from intentionally frightening people.

The same can be true of divisiveness: a person might divide in ignorance and perhaps do so while attempting to bring about greater unity. If the divisive person does not intend to be divisive, then the appropriate response would be (to borrow from Socrates) to take the person aside and assist them in correcting their behavior. If a person intends to be divisive, then they would deserve blame for whatever success they achieve and whatever harm they cause. While intent can be difficult to establish (since the minds of others are inaccessible), consideration of what a person does can go a long way in making this determination. In the case of Obama, his intent does not seem to be to divide Americans. Naturally, those who think Obama is divisive will tend to also accept that he is an intentional divider (rather than an accidental divider) and will attribute nefarious motives to him. Those who support him will do the opposite. There is, of course, almost no possibility of reason and evidence changing the minds of the committed about this matter. However, it is certainly worth the effort to try to consider the evidence or lack of evidence for the claim that Obama is an intentional divider. I do not believe that he is the most divisive president ever or even particularly divisive in a sense that is blameworthy. It is true that some disagree with him and dislike him; but it is their choice to expand the divide rather than close it. It is like a person who runs away, all the while insisting the other person is the one to blame for the growing distance. In closing, what I have written will change no minds—those who think Obama is divisive still think that. Those who think otherwise still think as they did before. This is, after all, a matter of how people feel rather than a matter of reason.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Third Parties & Voting for the Lesser Evil


I, along with some other philosophers, was recently interviewed about voting for an article by Olivia Goldhill of Quartz. While I certainly stand by what I said, interviews do have inherent problems. One common problem is the lack of depth. In some cases, this is due to the interview being short. For the Quartz piece, I spoke to the author for about five minutes. In other cases, the interview might be longer, but the content must be slashed down to fit in a limited amount of time or space. An interview I did about D&D alignments and the real world was about thirty minutes long; but only a few minutes were used in the final broadcast. Another problem is that material aimed at the general public typically has to be simplified. This is because most people are not experts on the subject at hand. As such, I need to expand a bit on my quote in the article.

After briefly discussing the difference between deontological and utilitarian approaches to voting, I presented my soundbite view of the issue:

 “As a citizen, I have a duty to others because it’s not just me and my principles, but everybody. I have to consider how what I do will impact other people. For example, if I was a die-hard Bernie supporter, I might say my principles tell me to vote for Bernie. But I’m not going to let my principles condemn other people to suffering.”

Interestingly enough, my position can be taken as either a deontological approach or a utilitarian approach. For the deontologist, an action is right or wrong in and of itself—the consequences are not what matter morally. For the utilitarian, the morality of an action is determined by its consequences. Looked at from a deontological perspective, acting on a duty to the general good would be the right thing to do. The fact that doing so would have good consequences is not what makes the action good. From the utilitarian perspective, the foundation of my duty would be utility: I should do what brings about the greatest good for the greatest number.

In the upcoming election, I intend to follow my principle. While I voted for Sanders in the primary and prefer him over Hillary, I think that a Trump presidency would be vastly worse for the country as a whole than another Clinton presidency. Hillary, as I see her, is essentially a 1990s moderate Republican with a modern liberal paint job. As such, she can be counted on as a competent, business-as-usual politician who will march along with the majority of the population regarding social policy (such as same sex marriage and gun regulation). Trump has no experience in office and I have no real idea what he would do as president. As such, I am taking the classic approach of choosing the lesser evil and the devil I know. If I were voting for the greater evil, Cthulhu would have my vote.

It might be objected that my approach is flawed. After all, if a person votes based on a rational assessment of the impact of an election on everyone, then she could end up voting against her own self-interest. What a person should do, it could be argued, is consider the matter selfishly—to vote based on what is in her interest regardless of the general good.

This approach does have considerable appeal and is based on an established moral philosophy known as ethical egoism: the view that a person should always take the action that maximizes her self-interest. Roughly put, for the ethical egoist, she is the only one with moral value. The opposing moral view is altruism, the view that other people also count morally. Ayn Rand is probably the best-known proponent of ethical egoism and the virtue of selfishness. This ideology has also been embraced by Paul Ryan and, explicitly, by many in the American Tea Party.

While supporters of selfishness claim that the collective result of individual selfishness will be the general good (a view advanced by Adam Smith), history and reason show the opposite. Everyone being selfish has exactly the result one would suspect—most people are worse off than they would be if people were more altruistic. To use an analogy, everyone being cruel does not make the world a kinder place. More people being kind makes it a kinder place.

This is not to say that people should not consider their interests, just that they should also consider the interests of others. This is, after all, what makes civilization possible. Pure selfishness without regulation, as Hobbes argued, is the state of nature and the state of war—which is not in anyone’s interest.

It can also be objected that my approach is flawed because it perpetuates the two party lockdown of the American political system. While many people are unaware of it, numerous third party candidates are running in 2016. Perhaps the best known is libertarian Gary Johnson. He received 1% of the popular vote in 2012 and is polling in the double digits in some polls. It is all but certain that he will not win, thus a vote for Johnson merely helps either Trump or Hillary get elected (depending on whether the person would have otherwise voted for one of them). Nader’s ill-fated bid for president enabled Bush to win the 2000 election, something that is often regarded as a disaster (but, to be fair, Al Gore might have done worse). While voting for a third party candidate can be seen as, at best, throwing away one’s vote, a case can be made for voting this way.

Like the approach I took in the interview, the argument for voting third party can be based on utilitarian considerations (one can also make a deontological argument based on the notion of a duty to vote one’s conscience). The difference is that the vote for the third party would be justified by the hope of long term consequences. To be specific, the justification would be that voting for a third party candidate could allow the greater evil to win this election. And the next election. And probably several more elections after that. But, eventually, the lockdown on politics by Democrats and Republicans could be broken by a viable third party. If the third party is likely to be better than the Democrats or Republicans, then this could be a good utilitarian argument.  It could also be a good argument if having a viable third party merely improved things for the population. The deciding factor would be whether or not the positive consequences of eventually getting a viable third party would be worth the cost of getting there. Naturally, the likelihood of viability is also a factor.
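The utilitarian weighing described above can be sketched as a toy expected-value comparison. This is purely illustrative: the utility numbers, the ten-election horizon, and the six losing election cycles are invented assumptions for the sake of the sketch, not figures from the argument itself.

```python
# Toy sketch of the utilitarian trade-off between always voting for the
# lesser evil and voting third party in the hope of eventual viability.
# All utility values and horizons below are hypothetical assumptions.

def total_utility(per_election_utility, elections):
    """Sum the utility of outcomes over a sequence of elections."""
    return sum(per_election_utility(e) for e in range(elections))

# Strategy A: always vote for the lesser evil (modest utility each cycle).
lesser_evil = total_utility(lambda e: 1, elections=10)

# Strategy B: vote third party; the greater evil wins the early cycles
# (negative utility), but a viable third party eventually emerges and
# yields a better outcome in the later cycles.
def third_party(e):
    return -2 if e < 6 else 5  # six losing cycles, then better outcomes

third = total_utility(third_party, elections=10)

print(lesser_evil, third)  # prints: 10 8
```

On these invented numbers the lesser-evil strategy wins (10 vs. 8), but shifting any assumption—how bad the greater evil is, how long viability takes, how much better a third party would be—can flip the result, which is exactly the deciding factor named above.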

I am split on this issue. On the one hand, there seems to be a good reason to stick with voting for the lesser evil, namely the fact that third party viability is quite a gamble. There is also the concern about whether any third party candidate is better than a lesser evil. On the other hand, voting for the lesser evil does lock us in the two party system and this could prove more damaging than allowing the greater evil to win numerous times on the way towards having a viable third party.

 


Silencer


Put a bit simply, a silencer is a device attached to a gun for the purpose of suppressing the sound it makes. This is usually done to avoid drawing attention to the shooter. This makes an excellent analogy for what happens to proposals for gun regulation: the sound is quickly suppressed so as to ensure that attention moves on to something new.

Part of this suppression is deliberate. After each mass shooting, the NRA and other similar groups step up pressure on the politicians they influence to ensure that new regulations are delayed, defeated or defanged. While it is tempting to cast the NRA as a nefarious player that subverts democracy, the truth seems to be that the NRA has mastered the democratic process: it organizes and guides very motivated citizens to give money (which is used to lobby politicians) and to contact their representatives in the government. This has proven vastly more effective than protests, sit-ins and drum circles. While it is true that the NRA represents but a fraction of the population, politics is rather like any sport: you have to participate to win. While most citizens do not even bother to vote, NRA member turnout is apparently quite good—thus they gain influence by voting. This is, of course, democracy. Naturally, another tale could be told of the NRA and its power and influence. A tale that presents the NRA and its members as subverting the will of the majority.

Certain pundits and politicians also engage in suppression. One standard tactic is, after a shooting, to claim that it is “too soon” to engage in discussion and lawmaking. Rather, the appropriate response involves moments of silence and prayer. While it is appropriate to pay respects to the wounded and dead, there is a difference between doing this and trying to run out the clock with this delaying tactic. Those that use it know quite well that if the discussion can be delayed, interest will fade and along with it the chances of any action being taken.

It is, in fact, appropriate to take action as soon as possible. To use the obvious analogy, if a fire is ravaging through a neighborhood, then the time to put out that fire is now. This way there will be less need of moments of silence and prayers for victims.

Another stock tactic is to accuse those proposing gun regulation of playing politics and exploiting the tragedy for political points or to advance an agenda. This approach can have some moral merit—if a person is engaged in a Machiavellian exploitation of some awful event (be it a mass shooting, a terrorist attack or a wave of food poisoning) without any real concern for the suffering of others, then that person would be morally awful. That said, the person could still be acting rightly, albeit for all the wrong reasons. This would be in terms of the consequences, which could be quite good despite the problematic motivations. For example, if a politician cynically exploited the harm inflicted by lead contaminated water in order to gain national attention, then that person would hardly be a good person. However, if this resulted in changes that significantly reduced lead poisoning in the United States, then consequences would certainly seem good and desirable.

It is also worth considering that using an awful event to motivate change for the better could result from laudable motives and a recognition of how human psychology generally works. To use an analogy, a person who loves someone who just suffered from a lifestyle inflicted heart attack could use that event to get the person to change her lifestyle and do so for commendable reasons. After all, people are most likely to do something when an awful event is fresh in their minds; hence this is actually the ideal time to address a problem—which leads to the final part of the discussion.

Although active suppression can be an effective tactic, it often relies on the fact that interest in a matter fades as time passes—this is why those opposed to new gun regulation use delaying tactics. They know that public attention will shift and fade.

On the one hand, the human tendency to lose interest can be regarded as a bad thing. As Merlin said in Excalibur, “for it is the doom of men that they forget.” In the case of mass shootings and gun violence, people quickly forget an incident—at least until another incident reminds them. This allows a problem to persist and is why action needs to be taken as soon as possible.

On the other hand, our forgetting is often our salvation. If the memory of fear and pain did not fade over time, they would be as wounds that did not heal. Just as a person would bleed to death physically from wounds that never healed, a person would bleed out emotionally if memory did not fade.

To use another analogy, if the mind is like a ship and memory is like a cargo, just as a ship that could never lighten its load would plunge to the ocean floor, a person that could never lighten her emotional load would be dragged into the great abyss of emotions and thus be ruined. Thus, forgetting is both our doom and our salvation. Of course, we would have far less need to forget if we remembered what we need to fix. And fixed it.

 


Modern Philosophy


Here is a (mostly) complete course in Modern Philosophy.

Notes & Readings

Modern Readings SP 2014

Modern Notes SP 2014

Modern Philosophy Part One (Hobbes & Descartes)

#1 This is the unedited video from the 1/7/2016 Modern class. It covers the syllabus and some of the historical background for the Modern era.

#2 This is the unedited video from the 1/12/2016 Modern philosophy class. It concludes the background for the modern era and the start of argument basics.

#3 This is the unedited video from the 1/14/2016 modern philosophy class. It covers the analogical argument, the argument by example, the argument from authority, appeal to intuition, and the background for Thomas Hobbes.

#4 This is the unedited video from the 1/19/2016 Modern Philosophy class. It covers Thomas Hobbes.

#5 This is the unedited video from the 1/21/2016 Modern Philosophy class. It covers Descartes’ first meditation as well as the paper for the class.

#6 This is the unedited video from the 1/26/2016 Modern class. It covers Descartes’ Meditations II & III.

#7 This is the unedited video from the 1/28/2016 Modern Philosophy course. It covers Descartes’ Meditations 4-6 and more about Descartes.

Modern Philosophy Part Two (Spinoza & Leibniz)

#8 This is the unedited video from the 2/2/2016 Modern Philosophy class. It covers the start of Spinoza’s philosophy. It could not be otherwise.

#9 No Video

#10 This is the unedited video from the 2/9/2016 Modern Philosophy class. It covers Spinoza.

#11 This is the unedited video from the 2/11/2016 Modern Philosophy class. It covers the end of Spinoza and the start of Leibniz.

#12 This is the unedited video from the 2/16/2016 Modern philosophy class. It covers Leibniz.

#13  This is the unedited video from the 2/18/2016 Modern philosophy class. It covers Leibniz addressing the problem of evil and the start of monads.

#14 This is the unedited video from the 2/23/2016 Modern philosophy class. It covers Leibniz’s monads, pre-established harmony and the city of God.

#15 This is the unedited video from the 2/25/2016 Modern philosophy class. It covers the end of Leibniz and the start of the background for the Enlightenment.

Modern Philosophy Part Three (Locke & Berkeley)

#16 This is the unedited video from the 3/1/2016 Modern Philosophy class. It finishes the Enlightenment background and begins John Locke.

#17 This is the unedited video from the 3/3/2016 Modern Philosophy class. It covers John Locke’s epistemology.

#18 This is the unedited video from the 3/15/2016 Modern Philosophy class. It includes a recap of Locke’s reply to skepticism and the start of his theory of personal identity.

#19 No Video

#20 This is the unedited video from the 3/22/2016 Modern Philosophy class. It covers Locke’s political philosophy.

#21 This is the unedited video from the 3/29/2016 Modern Philosophy class. It covers the first part of George Berkeley’s immaterialism.

#22 This unedited video is from the 3/31/2016 Modern Philosophy class. It covers the final part of Berkeley, including his arguments for God as well as the classic problems with his theory.

Modern Philosophy Part Four (Hume & Kant)

#23 This is the unedited video from the 4/5/2016 Modern Philosophy class. It covers the introduction to David Hume and his theory of necessary connections.

#24 This is the unedited video from the 4/7/2016 Modern philosophy class. It covers Hume’s skepticism regarding the senses.

#25 This is the unedited video from the 4/12/2016 Modern Philosophy class. It covers David Hume’s theory of personal identity, ethical theory and theory of religion.

#26 This is the unedited video from the 4/19/2016 Modern Philosophy class. It covers Kant’s philosophy.

#27 This is the unedited video from the 4/19/2016 Modern class. It covers Kant’s epistemology and metaphysics.

#28 This is the unedited video from the 4/21/2016 Modern Philosophy class. It covers Kant’s antinomies, God, and the categorical imperative.

 

Denmark’s Refugee “Fee”

In January 2016, Denmark passed a law under which refugees entering the country with assets greater than about US $1,450 will have their valuables taken in order to help pay for the cost of their being in the country. In response to international criticism, Denmark modified the law to allow refugees to keep items of sentimental value, such as wedding rings. This matter is certainly one of moral concern.

Critics have been quick to deploy a Nazi analogy, likening this policy to how the Nazis stole the valuables of those they sent to the concentration camps. While taking from refugees does seem morally problematic, the Nazi analogy does not really stick—there are too many relevant differences between the situations. Most importantly, the Danes would be caring for the refugees rather than murdering them. There is also the fact that the refugees are voluntarily going to Denmark rather than being rounded up, robbed, imprisoned and murdered. While the Danes have clearly not gone full Nazi, there are still grounds for moral criticism. However, I will endeavor to provide a short defense of the law—a rational consideration requires at least considering the pro side of the argument.

The main motivation of the law seems to be to deter refugees from coming to Denmark. This is a strategy of making the country less appealing than other countries in the hope that refugees will go somewhere else and be someone else’s burden. Countries, like individuals, do seem to have the right to make themselves less appealing. While this sort of approach is certainly not morally commendable, it does not seem to be morally wrong. After all, the Danes are not simply banning refugees but trying to provide a financial disincentive. Somewhat ironically, the law would not deter the poorest of refugees; it would only deter those who have enough property to make its loss a meaningful disincentive.

The main moral argument in favor of the law is based on the principle that people should help pay for the cost of their upkeep to at least the degree they can afford to do so. To use an analogy, if people show up at my house and ask to live with me and eat my food, it would certainly be fair of me to expect them to at least chip in for the costs of the utilities and food. After all, I do not get my utilities and food for free. This argument does have considerable appeal, but can be countered.

One counter to the argument is based on the fact that the refugees are fleeing a disaster. Going back to the house analogy, if survivors of a disaster showed up at my door asking for a place to stay until they could get back on their feet, taking their few remaining possessions to offset the cost of their food and shelter would seem cruel and heartless. They have lost so much already, and taking what little remains to them would add insult to injury. To use another analogy, it would be like a rescue crew stripping people of their valuables to help pay for the rescue. While rescues are expensive, such a practice would certainly seem awful.

A reply to this counter is that refugees who are well off should pay for what they receive. After all, if relatively well-off people showed up at my door asking for food and shelter, it would not seem wrong of me to expect them to contribute to the cost of things. If they can afford it, they have no grounds to claim a free ride off me. Likewise for well-off refugees. That said, the law does not actually address this point, unless having more than $1,450 counts as being well off.

Another point of consideration is that it is one thing to have people pay for lodging and food with money they have; quite another to take a person’s remaining worldly possessions. It seems like a form of robbery, using whatever threat drove the refugees from home as the weapon. The obvious reply is that the refugees would be choosing to go to Denmark; they could go to a more generous country. The problem is, however, that refugees might soon have little choice about where they go.

 


Against accommodationism: How science undermines religion

Faith versus Fact
There is currently a fashion for religion/science accommodationism, the idea that there’s room for religious faith within a scientifically informed understanding of the world.

Accommodationism of this kind gains endorsement even from official science organizations such as, in the United States, the National Academy of Sciences and the American Association for the Advancement of Science. But how well does it withstand scrutiny?

Not too well, according to a new book by distinguished biologist Jerry A. Coyne.

Gould’s magisteria

The most famous, or notorious, rationale for accommodationism was provided by the celebrity palaeontologist Stephen Jay Gould in his 1999 book Rocks of Ages. Gould argues that religion and science possess separate and non-overlapping “magisteria”, or domains of teaching authority, and so they can never come into conflict unless one or the other oversteps its domain’s boundaries.

If we accept the principle of Non-Overlapping Magisteria (NOMA), the magisterium of science relates to “the factual construction of nature”. By contrast, religion has teaching authority in respect of “ultimate meaning and moral value” or “moral issues about the value and meaning of life”.

On this account, religion and science do not overlap, and religion is invulnerable to scientific criticism. Importantly, however, this is because Gould is ruling out many religious claims as being illegitimate from the outset even as religious doctrine. Thus, he does not attack the fundamentalist Christian belief in a young earth merely on the basis that it is incorrect in the light of established scientific knowledge (although it clearly is!). He claims, though with little real argument, that it is illegitimate in principle to hold religious beliefs about matters of empirical fact concerning the space-time world: these simply fall outside the teaching authority of religion.

I hope it’s clear that Gould’s manifesto makes an extraordinarily strong claim about religion’s limited role. Certainly, most actual religions have implicitly disagreed.

The category of “religion” has been defined and explained in numerous ways by philosophers, anthropologists, sociologists, and others with an academic or practical interest. There is much controversy and disagreement. All the same, we can observe that religions have typically been somewhat encyclopedic, or comprehensive, explanatory systems.

Religions usually come complete with ritual observances and standards of conduct, but they are more than mere systems of ritual and morality. They typically make sense of human experience in terms of a transcendent dimension to human life and well-being. Religions relate these to supernatural beings, forces, and the like. But religions also make claims about humanity’s place – usually a strikingly exceptional and significant one – in the space-time universe.

It would be naïve or even dishonest to imagine that this somehow lies outside of religion’s historical role. While Gould wants to avoid conflict, he creates a new source for it, since the principle of NOMA is itself contrary to the teachings of most historical religions. At any rate, leaving aside any other, or more detailed, criticisms of the NOMA principle, there is ample opportunity for religion(s) to overlap with science and come into conflict with it.

Coyne on religion and science

The genuine conflict between religion and science is the theme of Jerry Coyne’s Faith versus Fact: Why Science and Religion are Incompatible (Viking, 2015). This book’s appearance was long anticipated; it’s a publishing event that prompts reflection.

In pushing back against accommodationism, Coyne portrays religion and science as “engaged in a kind of war: a war for understanding, a war about whether we should have good reasons for what we accept as true.” Note, however, that he is concerned with theistic religions that include a personal God who is involved in history. (He is not, for example, dealing with Confucianism, pantheism or austere forms of philosophical deism that postulate a distant, non-interfering God.)

Accommodationism is fashionable, but that has less to do with its intellectual merits than with widespread solicitude toward religion. There are, furthermore, reasons why scientists in the USA (in particular) find it politically expedient to avoid endorsing any “conflict model” of the relationship between religion and science. Even if they are not religious themselves, many scientists welcome the NOMA principle as a tolerable compromise.

Some accommodationists argue for one or another very weak thesis: for example, that this or that finding of science (or perhaps our scientific knowledge base as a whole) does not logically rule out the existence of God (or the truth of specific doctrines such as Jesus of Nazareth’s resurrection from the dead). For example, it is logically possible that current evolutionary theory and a traditional kind of monotheism are both true.

But even if we accept such abstract theses, where does it get us? After all, the following may both be true:

1. There is no strict logical inconsistency between the essentials of current evolutionary theory and the existence of a traditional sort of Creator-God.

AND

2. Properly understood, current evolutionary theory nonetheless tends to make Christianity as a whole less plausible to a reasonable person.

If 1. and 2. are both true, it’s seriously misleading to talk about religion (specifically Christianity) and science as simply “compatible”, as if science – evolutionary theory in this example – has no rational tendency at all to produce religious doubt. In fact, the cumulative effect of modern science (not least, but not solely, evolutionary theory) has been to make religion far less plausible to well-informed people who employ reasonable standards of evidence.

For his part, Coyne makes clear that he is not talking about a strict logical inconsistency. Rather, incompatibility arises from the radically different methods used by science and religion to seek knowledge and assess truth claims. As a result, purported knowledge obtained from distinctively religious sources (holy books, church traditions, and so on) ends up being at odds with knowledge grounded in science.

Religious doctrines change, of course, as they are subjected over time to various pressures. Faith versus Fact includes a useful account of how they are often altered for reasons of mere expediency. One striking example is the decision by the Mormon church (as recently as the 1970s) to admit blacks into its priesthood. This was rationalised as a new revelation from God, which raises an obvious question as to why God didn’t know from the start (and convey to his worshippers at an earlier time) that racial discrimination in the priesthood was wrong.

It is, of course, true that a system of religious beliefs can be modified in response to scientific discoveries. In principle, therefore, any direct logical contradictions between a specified religion and the discoveries of science can be removed as they arise and are identified. As I’ve elaborated elsewhere (e.g., in Freedom of Religion and the Secular State (2012)), religions have seemingly endless resources to avoid outright falsification. In the extreme, almost all of a religion’s stories and doctrines could gradually be reinterpreted as metaphors, moral exhortations, resonant but non-literal cultural myths, and the like, leaving nothing to contradict any facts uncovered by science.

In practice, though, there are usually problems when a particular religion adjusts. Depending on the circumstances, a process of theological adjustment can meet with internal resistance, splintering and mutual anathemas. It can lead to disillusionment and bitterness among the faithful. The theological system as a whole may eventually come to look very different from its original form; it may lose its original integrity and much of what once made it attractive.

All forms of Christianity – Catholic, Protestant, and otherwise – have had to respond to these practical problems when confronted by science and modernity.

Coyne emphasizes, I think correctly, that the all-too-common refusal by religious thinkers to accept anything as undercutting their claims has a downside for believability. To a neutral outsider, or even to an insider who is susceptible to theological doubts, persistent tactics to avoid falsification will appear suspiciously ad hoc.

To an outsider, or to anyone with doubts, those tactics will suggest that religious thinkers are not engaged in an honest search for truth. Rather, they are preserving their favoured belief systems through dogmatism and contrivance.

How science subverted religion

In principle, as Coyne also points out, the important differences in methodology between religion and science might (in a sense) not have mattered. That is, it could have turned out that the methods of religion, or at least those of the true religion, gave the same results as science. Why didn’t they?

Let’s explore this further. The following few paragraphs are my analysis, drawing on earlier publications, but I believe they’re consistent with Coyne’s approach. (Compare also Susan Haack’s non-accommodationist analysis in her 2007 book, Defending Science – within Reason.)

At the dawn of modern science in Europe – back in the sixteenth and seventeenth centuries – religious worldviews prevailed without serious competition. In such an environment, it should have been expected that honest and rigorous investigation of the natural world would confirm claims that were already found in the holy scriptures and church traditions. If the true religion’s founders had genuinely received knowledge from superior beings such as God or angels, the true religion should have been, in a sense, ahead of science.

There might, accordingly, have been a process through history by which claims about the world made by the true religion (presumably some variety of Christianity) were successively confirmed. The process might, for example, have shown that our planet is only six thousand years old (give or take a little), as implied by the biblical genealogies. It might have identified a global extinction event – just a few thousand years ago – resulting from a worldwide cataclysmic flood. Science could, of course, have added many new details over time, but not anything inconsistent with pre-existing knowledge from religious sources.

Unfortunately for the credibility of religious doctrine, nothing like this turned out to be the case. Instead, as more and more evidence was obtained about the world's actual structures and causal mechanisms, earlier explanations of the appearances were superseded. As science has advanced historically, it has increasingly revealed religion as premature in its attempts at understanding the world around us.

As a consequence, religion’s claims to intellectual authority have become less and less rationally believable. Science has done much to disenchant the world – once seen as full of spiritual beings and powers – and to expose the pretensions of priests, prophets, religious traditions, and holy books. It has provided an alternative, if incomplete and provisional, image of the world, and has rendered much of religion anomalous or irrelevant.

By now, the balance of evidence has turned decisively against any explanatory role for beings such as gods, ghosts, angels, and demons, and in favour of an atheistic philosophical naturalism. Regardless of what other factors were involved, the consolidation and success of science played a crucial role in this. In short, science has shown a historical, psychological, and rational tendency to undermine religious faith.

Not only the sciences!

I need to add that the damage to religion's authority has come not only from the sciences, narrowly construed, such as evolutionary biology. It has also come from work in what we usually regard as the humanities. Christianity and other theistic religions have especially been challenged by the efforts of historians, archaeologists, and academic biblical scholars.

Those efforts have cast doubt on the provenance and reliability of the holy books. They have implied that many key events in religious accounts of history never took place, and they’ve left much traditional theology in ruins. In the upshot, the sciences have undermined religion in recent centuries – but so have the humanities.

Coyne would not tend to express it that way, since he favours a concept of “science broadly construed”. He elaborates this as: “the same combination of doubt, reason, and empirical testing used by professional scientists.” On his approach, history (at least in its less speculative modes) and archaeology are among the branches of “science” that have refuted many traditional religious claims with empirical content.

But what is science? Like most contemporary scientists and philosophers, Coyne emphasizes that there is no single process that constitutes “the scientific method”. Hypothetico-deductive reasoning is, admittedly, very important to science. That is, scientists frequently make conjectures (or propose hypotheses) about unseen causal mechanisms, deduce what further observations could be expected if their hypotheses are true, then test to see what is actually observed. However, the process can be untidy. For example, much systematic observation may be needed before meaningful hypotheses can be developed. The precise nature and role of conjecture and testing will vary considerably among scientific fields.

Likewise, experiments are important to science, but not to all of its disciplines and sub-disciplines. Fortunately, experiments are not the only way to test hypotheses (for example, we can sometimes search for traces of past events). Quantification is also important… but not always.

However, Coyne says, a combination of reason, logic and observation will always be involved in scientific investigation. Crucially, some kind of testing, whether by experiment or observation, is needed to filter out non-viable hypotheses.

If we take this sort of flexible and realistic approach to the nature of science, the line between the sciences and the humanities becomes blurred. Though they tend to be less mathematical and experimental, for example, and are more likely to involve mastery of languages and other human systems of meaning, the humanities can also be “scientific” in a broad way. (From another viewpoint, of course, the modern-day sciences, and to some extent the humanities, can be seen as branches from the tree of Greek philosophy.)

It follows that I don’t terribly mind Coyne’s expansive understanding of science. If the English language eventually evolves in the direction of employing his construal, nothing serious is lost. In that case, we might need some new terminology – “the cultural sciences” anyone? – but that seems fairly innocuous. We already talk about “the social sciences” and “political science”.

For now, I prefer to avoid confusion by saying that the sciences and humanities are continuous with each other, forming a unity of knowledge. With that terminological point under our belts, we can then state that both the sciences and the humanities have undermined religion during the modern era. I expect they’ll go on doing so.

A valuable contribution

In challenging the undeserved hegemony of religion/science accommodationism, Coyne has written a book that is notably erudite without being dauntingly technical. The style is clear, and the arguments should be understandable and persuasive to a general audience. The tone is rather moderate and thoughtful, though opponents will inevitably cast it as far more polemical and “strident” than it really is. This seems to be the fate of any popular book, no matter how mild-mannered, that is critical of religion.

Coyne displays a light touch, even while drawing on his deep involvement in scientific practice (not to mention a rather deep immersion in the history and detail of Christian theology). He writes, in fact, with such seeming simplicity that it can sometimes be a jolt to recognize that he’s making subtle philosophical, theological, and scientific points.

In that sense, Faith versus Fact testifies to a worthwhile literary ideal. If an author works at it hard enough, even difficult concepts and arguments can usually be made digestible. It won’t work out in every case, but this is one where it does. That’s all the more reason why Faith versus Fact merits a wide readership. It’s a valuable, accessible contribution to a vital debate.

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

Yoga & Cultural Appropriation

Homo sum, humani nihil a me alienum puto.

-Terence

In the fall of 2015, a free yoga class at the University of Ottawa was suspended out of concern that it might be an act of cultural appropriation. Staff at the Centre for Students with Disabilities, where the class was offered, made this decision on the basis of a complaint.  A Centre official noted that many cultures, including the culture from which yoga originated, “have experienced oppression, cultural genocide and diasporas due to colonialism and western supremacy … we need to be mindful of this and how we express ourselves while practising yoga.”  In response, there was an attempt to “rebrand” the class as “mindful stretching.” Due to issues regarding a French translation of the phrase, the rebranding failed and the class was suspended.

When I first heard about this story, I inferred it was satire on the part of The Onion because it seemed to be an absurd lampooning of political correctness. It turned out that it was real, but still absurd. But, as absurdities sometimes do, it does provide an interesting context for discussing a serious subject—in this case that of cultural appropriation.

The concept of cultural appropriation is somewhat controversial, but the basic idea is fairly simple. In general terms, cultural appropriation takes place when a dominant culture takes ("appropriates") from a marginalized culture for morally problematic reasons. For example, white college students have been accused of cultural appropriation (and worse) when they have made mocking use of aspects of black culture for theme parties. Some on the left (or "the politically correct" as they are called by their detractors) regard cultural appropriation as morally wrong. Some on the right think the idea of cultural appropriation is ridiculous and that people should just get over past oppressions and forget about them.

While I am no fan of what can justly be considered mere political correctness, I do agree that there are moral problems with what is often designated as cultural appropriation. One common area of cultural appropriation is that which is intended to lampoon. While comedy, as Aristotle noted, is a species of the ugly, it should not enter into the realm of what is actually hurtful. As such, lampooning of cultural stereotypes that crosses over into being actually hurtful would cease to be comedic and would instead be merely insulting mockery. An excellent (or awful) example of this would be the use of blackface by people who are not black. Naturally, specific cases would need to be given due consideration—it can be aesthetically legitimate to use the shock of apparent cultural appropriation to make a point.

It can, of course, be objected that lampooning is exempt from the usual moral concerns about insulting people and thus that such mocking insults would be morally fine. It must also be noted that I am making no assertions here about what should be forbidden by law. My view is, in fact, that even the most insulting mockery should not be restricted by law. Morality is, after all, distinct from legality.

Another common area of cultural appropriation is the misuse of symbols from a culture. For example, having an underwear model prance around in a war bonnet is not intended as lampooning, but is an insult to the culture that regards the war bonnet as an honor to be earned. It would be comparable to having underwear models prancing around displaying unearned honors such as the Purple Heart or the Medal of Honor. This misuse can, of course, be unintentional—people often use cultural marks of honor as “cool accessories” without any awareness of what they actually mean. While people should, perhaps, do some research before borrowing from other cultures, innocent ignorance is certainly forgivable.

It could be objected that such misuse is not morally problematic since there is no real harm being done when a culture is insulted by the misuse of its symbols. This, of course, would need to be held to consistently—a person making this argument to allow the misuse of the symbols of another culture would need to accept a comparable misuse of her own most sacred symbols as morally tolerable. Once again, I am not addressing the legality of this matter—although cultures do often have laws protecting their own symbols, such as military medals or religious icons.

While it would be easy to run through a multitude of cases that would be considered cultural appropriation, I prefer to focus on presenting a general principle about what would be morally problematic cultural appropriation. Given the above examples and consideration of the others that can be readily found, what seems to make appropriation inappropriate is the misuse or abuse of the cultural elements. That is, there needs to be meaningful harm inflicted by the appropriation. This misuse or abuse could be intentional (which would make it morally worse) or unintentional (which might make it an innocent error of ignorance).

It could be contended that any appropriation of culture is harmful by using an analogy to trademark, patent, and copyright law. A culture could be regarded as holding the moral “trademark”, “patent” or “copyright” (as appropriate) on its cultural items and thus people who are not part of that culture would be inflicting harm by appropriating these items. This would be analogous to another company appropriating, for example, Disney’s trademarks, violating the copyrights held by Random House or the patents held by Google. Culture could be thus regarded as a property owned by members of that culture and passed down as a matter of inheritance. This would seem to make any appropriation of culture by outsiders morally problematic—although a culture could give permission for such use by intentionally sharing the culture. Those who are fond of property rights should find this argument appealing.

One interesting way to counter the ownership argument is to note that humans are born into culture by chance and any human could be raised in any culture. As such, it could be claimed that humans have an ownership stake in all human cultures and thus are entitled to adopt culture as they see fit. The culture should, of course, be shown proper respect. This would, of course, be a form of cultural communism—which those who like strict property rights might find unappealing.

The response to this is to note that humans are also born by chance to families and any human could be designated the heir of a family, yet there are strict rules governing the inheritance of property. As such, cultural inheritance could work the same way—only the true heirs can give permission to others to use the culture. This should appeal to those who favor strict protections for inherited property.

My own inclination is that humans are the inheritors of all human culture and thus we all have a right to the cultural wealth our species has produced.  Naturally, individual ownership of specific works should be properly respected. However, as with any gift, it must be treated with due respect and used appropriately—rather than misused through appropriation. So, cancelling the yoga class was absurd.
