Category Archives: Critical Thinking

The Incredible Shifting Hillary

When supporters of Donald Trump are asked why they back him, the most common answers are that Trump “tells it like it is” and that he is “authentic.” When people who dislike Hillary are asked why, they often point to her ever-shifting positions and say that she merely tells people what she thinks they want to hear.

Given that Trump has, at best, a distant relationship with the truth, it is somewhat odd that he is seen as telling it like it is. He may be authentic, but he is most assuredly telling it like it is not. While Hillary has shifted positions, she has a far closer relationship with the truth (although still not a committed one). Those who oppose Hillary tend to focus on these shifts in making the case against her. Her defenders endeavor to minimize the impact of these claims or boldly try to make a virtue of said shifting. Given the importance of the shifting, this is a matter well worth considering.

While the extent of Hillary’s shifting can be debated, the fact that she has shifted on major issues is a matter of fact. Good examples of shifts include the second Iraq War, free trade, same-sex marriage and law enforcement. While many are tempted to claim that the fact that she has shifted her views on such issues proves she is wrong now, doing this would be to fall victim to the classic ad hominem tu quoque fallacy. This is an error in reasoning in which it is inferred that a person’s current view or claim is mistaken because they have held to a different view or claim in the past. While two inconsistent claims cannot be true at the same time, pointing out that a person’s current claim is inconsistent with a past claim does not prove which claim is not true (and both could actually be false). After all, the person could have been wrong then while being right now. Or vice versa. Or wrong in both cases. Because of this, it cannot be inferred that Hillary’s views are wrong now simply because she held opposite views in the past.

While truth is important, the main criticism of Hillary’s shifting is not that she has moved from a correct view to an erroneous view. Rather, the criticism is that she is shifting her expressed views to match whatever she thinks the voters want to hear. That is, she is engaged in pandering.

Since pandering is a common practice in politics, it seems reasonable to hold that it is unfair to single Hillary out for special criticism. This does not, of course, defend the practice. To accept that being common justifies a practice would be to fall victim to the common practice fallacy. This is an error in reasoning in which a practice is defended by asserting it is a common one. Obviously enough, the mere fact that something is commonly done does not entail that it is good or justified. That said, if a practice is common yet wrong, it is still unfair to single out a specific person for special criticism for engaging in that practice. Rather, all those who engage in the practice should be criticized.

It could be argued that while pandering is a common practice, Hillary does warrant special criticism because her shifting differs in relevant and significant ways from the shifting of others. This could be a matter of volume (she shifts more than others), content (she shifts on more important issues), extent (she shifts to a greater degree) or some other factors. While judging the nature and extent of shifts does involve some subjective assessment, these factors can be evaluated with a reasonable degree of objectivity—although partisan influences can interfere with this. Since Hillary is generally viewed through the lenses of intense partisanship, I will not endeavor to address this matter—it is unlikely that anything I could write would sway partisan opinions. I will, however, address the ethics of shifting.

While there is a tendency to regard position shifting with suspicion, there are cases in which it is not only acceptable, but laudable. These are cases in which the shift is justified by evidence or reasoning that warrants such a shift. For example, I was a theoretical anarchist for a while in college: I believed that the best government was the least government and preferably none at all. However, reading Locke, Hobbes and others as well as gaining a better understanding of how humans actually behave resulted in a shift in my position. I am no longer an anarchist on the grounds that the position is not well supported. To use another example, I went through a phase in which I was certain in my atheism. However, arguments made by Hume and Kant changed my view regarding the possibility of such certainty. As a final example, I used to believe in magical beings like the Easter Bunny and Santa Claus. However, the evidence of their nonexistence convinced me to shift my view. In all these cases the shifts are laudable: I changed my view because of considered evidence and argumentation. While there can be considerable debate about what counts as good evidence or reasoning for a shift, the basic principle seems sound. A person should believe what is best supported by evidence and reasoning, and this often changes over time.

Turning back to Hillary, if she has shifted her views on the basis of evidence and reasoning that justly support her new views, then she should not be condemned for the shift. For example, if she believed in the approach to crime taken by her husband when he was President, but has changed her view in the face of evidence that this view is flawed, then her change would be quite reasonable. As might be expected, her supporters tend to claim this is why she changes her views. The challenge is to show that this is the case. Her critics typically claim that the reason for her shifts is to match what she thinks will get her the most votes, which leads to the question of whether this is a bad thing or not.

A very reasonable concern about a politician who just says what she thinks the voters want to hear is that the person lacks principles, so that the voters do not really know who they are voting for. As such, they cannot make a good decision regarding what the politician would actually do in office.

A possible reply to this is that a politician who shifts her views to match those of the voters is exactly what people should want in a representative democracy: the elected officials should act in accord with the will of the people. This does raise the broad subject of the proper function of an elected official: to do the will of the people, to do what they said they would do, to act in accord with their character and principles, or something else. This goes beyond the limited scope of this essay, but the answer is rather critical to determining whether Hillary’s shifting is a good or bad thing. If politicians should act on their own principles and views rather than doing what the people want them to do, then there would seem to be good grounds for criticizing any politician whose own views are not those of the people.

A final interesting point is to argue that Hillary should not be criticized for shifting her views to match those that are now held by the majority of people (or majority of Democrats). If other people can shift their views on these matters over time in ways that are acceptable, then the same should apply to Hillary. For example, when Hillary was against same-sex marriage that was the common view in the country. Now, most Americans are fine with it—and so is Hillary. Her defenders assert that she, like most Americans, has changed her views over time in the face of changing social conditions. Her detractors claim she is merely pandering and has no commitment beyond achieving power. This is a factual matter, albeit one that is hard to settle without evidence as to what is really going on in her mind. After all, a mere change in her view to match the general view is consistent with both unprincipled pandering and a reasoned change in a position that has evolved with the times.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

The shame of public shaming

Russell Blackford, University of Newcastle

Public shaming is not new. It’s been used as a punishment in all societies – often embraced by the formal law and always available for day-to-day policing of moral norms. However, over the past couple of centuries, Western countries have moved away from more formal kinds of shaming, partly in recognition of its cruelty.

Even in less formal settings, shaming individuals in front of their peers is now widely regarded as unacceptable behaviour. This signifies an improvement in the moral milieu, but its effect is being offset by the rise of social media and, with it, new kinds of shaming.

Indeed, as Welsh journalist and documentary maker Jon Ronson portrays vividly in his latest book, social media shaming has become a social menace. Ronson’s So You’ve Been Publicly Shamed (Picador, 2015) is a timely contribution to the public understanding of an emotionally charged topic.

Shaming is on the rise. We’ve shifted – much of the time – to a mode of scrutinising each other for purity. Very often, we punish decent people for small transgressions or for no real transgressions at all. Online shaming, conducted via the blogosphere and our burgeoning array of social networking services, creates an environment of surveillance, fear and conformity.

The making of a call-out culture

I noticed the trend – and began to talk about it – around five years ago. I’d become increasingly aware of cases where people with access to large social media platforms used them to “call out” and publicly vilify individuals who’d done little or nothing wrong. Few onlookers were prepared to support the victims. Instead, many piled on with glee (perhaps to signal their own moral purity; perhaps, in part, for the sheer thrill of the hunt).

Since then, the trend to an online call-out culture has continued and even intensified, but something changed during 2015. Mainstream journalists and public intellectuals finally began to express their unease.

There’s no sign that the new call-out culture is fading away, but it’s become a recognised phenomenon. It is now being discussed more openly, and it’s increasingly questioned. That’s partly because even its participants – people who assumed it would never happen to them – sometimes find themselves “called out” for revealing some impurity of thought. It’s become clear that no moral or political affiliation holds patents on the weaponry of shaming, and no one is immune to its effects.

As Ronson acknowledges, he has, himself, taken part in public shamings, though the most dramatic episode was a desperate act of self-defence when a small group of edgy academics hijacked his Twitter identity to make some theoretical point. Shame on them! I don’t know what else he could have done to make them back down.

That, however, was an extreme and peculiar case. It involved ongoing abuse of one individual by others who refused to “get” what they were doing to distress him, even when asked to stop. Fascinating though the example is, it is hardly a precedent for handling more common situations.

At one time, if we go along with Ronson, it felt liberating to speak back in solidarity against the voices of politicians, corporate moguls, religious leaders, radio shock jocks, newspaper columnists and others with real power or social influence.

But there can be a slippery slope… from talking back in legitimate ways against, say, a powerful journalist (criticising her views and arguments, and any abusive conduct), to pushing back in less legitimate ways (such as attempting to silence her viewpoint by trying to get her fired), to destroying relatively powerless individuals who have done nothing seriously wrong.

Slippery slope arguments have a deservedly bad reputation. But some slopes really are slippery, and some slippery slope arguments really are cogent. With public online shaming, we’ve found ourselves, lately, on an especially slippery slope. In more ways than one, we need to get a grip.

Shaming the shamers

Ronson joined in a campaign of social media shaming in October 2009: one that led to some major advertisers distancing themselves from the Daily Mail in the UK. This case illustrates some problems when we discuss social media shaming, so I’ll give it more analysis than Ronson does.

One problem is that, as frequently happens, it was a case of “shame the shamer”. The recipient of the shaming was especially unsympathetic because she was herself a public shamer of others.

The drama followed a distasteful – to say the least – column by Jan Moir, a British journalist with a deplorable modus operandi. Moir’s topic was the death of Stephen Gately, one of the singers from the popular Irish band Boyzone.

Gately had been found dead while on holiday in Mallorca with his civil partner, Andrew Cowles. Although the coroner attributed the death to natural causes, Moir wrote that it was “not, by any yardstick, a natural one” and that “it strikes another blow to the happy-ever-after myth of civil partnerships.”

Ronson does not make the point explicit in So You’ve Been Publicly Shamed, but what immediately strikes me is that Moir was engaging in some (not-so-)good old-fashioned mainstream media shaming. She used her large public platform to hold up identified individuals to be shamed over very private behaviour. Gately could not, of course, feel any shame from beyond the grave, but Moir’s column was grossly tasteless since he had not even been buried when it first appeared.

Moir stated, self-righteously: “It is important that the truth comes out about the exact circumstances of [Gately’s] strange and lonely death.” But why was it so important that the public be told such particulars as whether or not Cowles (at least) hooked up that tragic evening for sex with a student whom Moir names, and whether or not some, or all, of the three young men involved used cannabis or other recreational drugs that night?

To confirm Moir’s propensities as a public shamer, no one need go further than the same column. She follows her small-minded paragraphs about Gately with a few others that shame “socialite” Tara Palmer-Tomkinson for no worse sin than wearing a revealing outfit to a high-society party.

You get the picture, I trust. I’m not asking that Moir, or anyone else, walk on eggshells lest her language accidentally offend somebody, or prove open to unexpectedly uncharitable interpretations. Quite the opposite: we should all be able to speak with some spontaneity, without constantly censoring how we formulate our thoughts. I’ll gladly extend that freedom to Moir.

But Moir is not merely unguarded in her language: she can be positively reckless, as with her suggestion that Palmer-Tomkinson’s wispy outfit might more appropriately be worn by “Timmy the Tranny, the hat-check personage down at the My-Oh-My supper club in Brighton.” No amount of charitable interpretation can prevent the impression that she is often deliberately, or at best uncaringly, hurtful. In those circumstances, I have no sympathy for her if she receives widespread and severe criticism for what she writes.

When it comes to something like Moir’s hatchet job on Gately and Cowles, and their relationship, I can understand the urge to retaliate – to shame and punish in return. It’s no wonder, then, that Ronson discusses the feeling of empowerment when numerous people, armed with their social media accounts, turned on badly behaved “giants” such as the Daily Mail and its contributors. As it seemed to Ronson in those days, not so long ago, “the silenced were getting a voice.”

But let’s be careful about this.

Some distinctions

A few aspects need to be teased out. Even when responding to the shamers, we ought to think about what’s appropriate.

For a start, I am – I’m well aware – being highly critical of Moir’s column and her approach to journalism. In that sense, I could be said to be “shaming” her. But we don’t have to be utterly silent when confronted by unpleasant behaviour from public figures.

My criticisms are, I submit, fair comment on material that was (deliberately and effectively) disseminated widely to the public. In writing for a large audience in the way she does – especially when she takes an aggressive and hurtful approach toward named individuals – Moir has to expect some push-back.

We can draw reasonable distinctions. I have no wish to go further than criticism of what Moir actually said and did. I don’t, for example, want to misrepresent her if I can avoid it, to make false accusations, or to punish her in any way that goes beyond criticism. I wouldn’t demand that she be no-platformed from a planned event or that advertisers withdraw their money from the Daily Mail until she is fired.

The word criticism is important. We need to think about when public criticism is fair and fitting, when it becomes disproportionate, and when it spirals down into something mean and brutal.

Furthermore, we can distinguish between 1) Moir’s behaviour toward individuals and 2) her views on issues of general importance, however wrong or ugly those views might be. In her 2009 comments on Gately’s death, the two are entangled, but it doesn’t follow that they merit just the same kind of response.

Moir’s column intrudes on individuals’ privacy and holds them up for shaming, but it also expresses an opinion on legal recognition of same-sex couples in the form of civil unions. Although she is vague, Moir seems to think that individuals involved in legally recognised same-sex relationships are less likely to be monogamous (and perhaps more likely to use drugs) than people in heterosexual marriages. This means, she seems to imply, that there’s something wrong with, or inferior about, same-sex civil unions.

In fairness, Moir later issued an apology in which she explained her view: “I was suggesting that civil partnerships – the introduction of which I am on the record in supporting – have proved just to be as problematic as marriages.” This is, however, difficult to square with the words of her original column, where she appears to deny, point blank, that civil unions “are just the same as heterosexual marriages.”

Even if she is factually correct about statistical differences between heterosexual marriages and civil unions, this at least doesn’t seem to be relevant to public policy. After all, plenty of marriages between straight people are “open” (and may or may not involve the use of recreational drugs), but they are still legally valid marriages.

If someone does think certain statistical facts about civil unions are socially relevant, however, it’s always available to them to argue why. They should be allowed to do so without their speech being legally or socially suppressed. It’s likewise open to them to produce whatever reliable data might be available. Furthermore, we can’t expect critics of civil unions to present their full case on every occasion when they speak up to express a view. That would be an excessive condition for any of us to have to meet when we express ourselves on important topics.

More generally, we can criticise bad ideas and arguments – or even make fun of them if we think they’re that bad – but as a rule we shouldn’t try to stop their expression.

Perhaps some data exists to support Moir’s rather sneering claims about civil unions. But an anecdote about the private lives of a particular gay couple proves nothing one way or the other. Once again, many heterosexual marriages are not monogamous, but a sensational story involving a particular straight couple would prove nothing about how many.

In short, Moir is entitled to express her jaundiced views about civil unions or same-sex relationships more generally, and the worst she should face is strong criticism, or a degree of satire, aimed primarily at the views themselves. But shining a spotlight on Cowles and Gately was unfair, callous, nasty, gratuitous, and (to use one of her own pet words) sleazy. In addition to criticising her apparent views, we can object strongly when she publicly shames individuals.

Surfing down the slippery slope

Ronson discusses a wide range of cases, and an evident problem is that they can vary greatly, making it difficult to draw overall conclusions or to frame exact principles.

Some individuals who’ve been publicly shamed clearly enough “started it”, but even they can suffer from a cruel and disproportionate backlash. Some have been public figures who’ve genuinely done something wrong, as with Jonah Lehrer, a journalist who fabricated quotes to make his stories appear more impressive. It’s only to be expected that Lehrer’s irresponsibility and poor ethics would damage his career. But even in his case, the shaming process was over the top. Some of it was almost sadistic.

Other victims of public shaming are more innocent than Lehrer. Prominent among them is Justine Sacco, whom Ronson views with understandable sympathy. Sacco’s career and personal life were ruined after she made an ill-advised tweet on 20 December 2013. It said: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” She was then subjected to an extraordinarily viral Twitter attack that led quickly to her losing her job and becoming an international laughing stock.

It appears that her tweet went viral after a Gawker journalist retweeted it (in a hostile way) to his 15,000 followers at the time – after just one person among Sacco’s 170 followers had passed it on to him.

Ronson offers his own interpretation of the Sacco tweet:

It seemed obvious that her tweet, whilst not a great joke, wasn’t racist, but a self-reflexive comment on white privilege – on our tendency to naively imagine ourselves immune to life’s horrors. Wasn’t it?

In truth, it’s not obvious to me just how to interpret the tweet, and of course I can’t read Sacco’s mind. If it comes to that, I doubt that she pondered the wording carefully. Still, this small piece of sick humour was aimed only at her small circle of Twitter followers, and it probably did convey to them something along the lines of what Ronson suggests. In its original context, then, it did not merely ridicule the plight of black AIDS victims in Africa.

Much satire and humour is, as we know, unstable in its meaning – simultaneously saying something outrageous and testing our emotions as we find ourselves laughing at it. It can make us squirm with uncertainty. This applies (sometimes) to high literary satire, but also to much ordinary banter among friends. We laugh but we also squirm.

In any event, charitable interpretations – if not a single straightforward one – were plainly available for Sacco’s tweet. This was a markedly different situation from Jan Moir’s gossip-column attacks on hapless celebrities and socialites. And unlike Moir, Sacco lacked a large media platform, an existing public following, and an understanding employer.

Ronson also describes the case of Lindsey Stone, a young woman whose life was turned to wreckage because of a photograph taken in Arlington National Cemetery in Virginia. In the photo she is mocking a “Silence and Respect” sign by miming a shout and making an obscene gesture. The photo was uploaded on Facebook, evidently with inadequate privacy safeguards, and eventually it went viral, with Stone being attacked by a cybermob coming from a political direction opposite to the mob that went after Sacco.

While the Arlington photograph might seem childish, or many other things, posing for it and posting it on Facebook hardly add up to any serious wrongdoing. It is not behaviour that merited the outcome for Lindsey Stone: destruction of her reputation, loss of her job, and a life of ongoing humiliation and fear.

Referring to such cases, Ronson says:

The people we were destroying were no longer just people like Jonah [Lehrer]: public figures who had committed actual transgressions. They were private individuals who really hadn’t done anything much wrong. Ordinary humans were being forced to learn damage control, like corporations that had committed PR disasters.

Thanks to Ronson’s intervention, Stone sought help from an agency that rehabilitates online reputations. Of Stone’s problems in particular, he observes:

The sad thing was that Lindsey had incurred the Internet’s wrath because she was impudent and playful and foolhardy and outspoken. And now here she was, working with Farukh [an operative for the rehabilitation agency] to reduce herself to safe banalities – to cats and ice cream and Top 40 chart music. We were creating a world where the smartest way to survive is to be bland.

This is not the culture we wanted

Ronson also quotes Michael Fertik, from the agency that helped Stone: “We’re creating a culture where people feel constantly surveilled, where people are afraid to be themselves.”

“We see ourselves as nonconformist,” Ronson concludes sadly, “but I think all of this is creating a more conformist, conservative age.”

This is not the culture we wanted. It’s a public culture that seems broken, but what can we do about it?

For a start, it helps to recognise the problem, but it’s difficult, evidently, for most people to accept the obvious advice: Be forthright in debating topics of general importance, but always subject to some charity and restraint in how you treat particular people. Think through – and not with excuses – what that means in new situations. Be willing to criticise people on your own side if they are being cruel or unfair.

It’s not our job to punish individuals, make examples of them, or suppress their views. Usually we can support our points without any of this; we can do so in ways that are kinder, more honest, more likely to make intellectual progress. The catch is, it requires patience and courage.

Our public culture needs more of this sort of patience, more of this sort of courage. Can we – will we – rise to the challenge?

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

[My page at Academia.edu]

Trump’s Enquiring Rhetoric

As this is being written, Donald Trump is the last surviving Republican presidential candidate. His final opponents, Cruz and Kasich, suspended their campaigns, though perhaps visions of a contested convention still haunt their dreams.

Cruz left the field of battle with a bizarre Trump arrow lodged in his buttocks: Trump had attacked Cruz by alleging that Ted Cruz’ father was associated with Lee Harvey Oswald. The basis for this claim was an article in the National Enquirer, a tabloid that has claimed Justice Scalia was assassinated by a hooker working for the CIA. While this tabloid has no credibility, the fact that Trump used it as a source necessitated an investigation into the claim about Cruz’ father. As should be expected, Politifact ranked it as Pants on Fire. I almost suspect that Trump is trolling the media and laughing about how he has forced them to seriously consider and thoroughly investigate claims that are utterly lacking in evidence (such as his claims about televised celebrations in America after the 9/11 attacks).

When confronted about his claim about an Oswald-Cruz connection, Trump followed his winning strategy: he refused to apologize and engaged in some Trump-Fu as his “defense.” When interviewed on ABC, his defense was as follows:  “What I was doing was referring to a picture reported and in a magazine, and I think they didn’t deny it. I don’t think anybody denied it. No, I don’t know what it was exactly, but it was a major story and a major publication, and it was picked up by many other publications. …I’m just referring to an article that appeared. I mean, it has nothing to do with me.”

This response begins with what appears to be a fallacy: he is asserting that if a claim is not denied, then it is true (I am guessing the “they” is either the Cruz folks or the National Enquirer folks). This can be seen as a variation on the classic appeal to ignorance fallacy. In this fallacy, a person infers that if there is a lack of evidence against a claim, then the claim is true. However, proving a claim requires that there be adequate evidence for the claim, not just a lack of evidence against it. There is no evidence that I do not have a magical undetectable pet dragon that only I can sense. This, however, does not prove that I have such a pet.

While a failure to deny a claim might be regarded as suspicious, not denying a claim is not proof the claim is true. It might not even be known that a claim has been made (so it would not be denied). For example, Kanye West is not denying that he plans to become master of the Pan flute—but this is not proof he intends to do this. It can also be a good idea to not lend a claim psychological credence by denial—some people think that denial of a claim is evidence it is true. Naturally, Cruz did end up denying the claim.

Trump next appears to be asserting the claim is true because it was “major” and repeated. He failed to note the “major” publication is a tabloid that is lacking in credibility. As such, Trump could be seen as engaging in a fallacious appeal to authority. In this case, the National Enquirer lacks the credibility needed to serve as the basis for a non-fallacious argument from authority. Roughly put, a good argument from authority is such that the credibility of the authority provides good grounds for accepting a claim. Trump did not have a good argument from authority.

Trump also uses a fascinating technique of “own and deny.” He does this by launching an attack and then both “owning” and denying it. It is as if he punched Cruz in the face and then said, “it wasn’t me, someone else did the punching. But I will punch Cruz again. Although it wasn’t me.” I am not sure if this is a rhetorical technique or a pathological condition. However, it does allow him the best of both worlds: he can appear tough and authentic by “owning it” yet also appear to not be responsible for the attack. This seems to be quite appealing to his followers, although it is obviously logically problematic: one must either own the attack or deny it; both cannot be true.

He also makes use of an established technique:  he gets media attention drawn to a story and then uses this attention to “prove” the story is true (because it is “major” and repeated). While effective, this technique does not prove a claim is true.

Trump was also interviewed on NBC and asked why he attacked Cruz in the face of almost certain victory in Indiana.  In response, he said, “Well, because I didn’t know I had it in the grasp. …I had no idea early in the morning that was — the voting booths just starting — the voting booths were practically not even opened when I made this call. It was a call to a show. And they ran a clip of some terrible remarks made by the father about me. And all I did is refer him to these articles that appeared about his picture. And — you know, not such a bad thing.”

This does provide something of a defense for Trump. As he rightly says, he did not know he would win and he hoped that his attack would help his chances. While the fact that a practice is common does not justify it (this would be the common practice fallacy), Trump seems to be playing within the rules of negative campaigning. That said, the use of the National Enquirer as a source is a new twist, as is linking an opponent to the JFK assassination. This is not to say that Trump is acting in a morally laudable manner, just that he is operating within the rules of the game. To use an analogy, while the brutal hits of football might be regarded as morally problematic, they are within the rules of the game. Likewise, such attacks are within the rules of politics.

However, Trump goes on to commit the “two wrongs make a right” fallacy: since bad things were said about Trump, he concludes that he has the right to strike back. While Trump has every right to respond to attacks, he does not have a right to respond with a completely fabricated accusation.

Trump then moves to downplaying what he did and engages in one of his signature moves: he is not really to blame (he just pointed out the articles). So, his defense is essentially “I am just punching the guy back. But, I really didn’t punch him. I just pointed out that someone else punched him. And that punching was not a bad thing.”

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Philosophy & My Old Husky III: Experiments & Studies

While my husky, Isis, and I have both slowed down since we teamed up in 2004, she is doing remarkably well these days. As I often say, pulling so many years will slow down man and dog. While Isis faced a crisis, most likely due to the wear of time on her spine, the steroids seemed to have addressed the pain and inflammation so that we have resumed our usual adventures. Tail up and bright eyed is the way she is now and the way she should be.

In my previous essay I looked at using causal reasoning on a small scale by applying the methods of difference and agreement. In this essay I will look at thinking critically about experiments and studies.

The gold standard in science is the controlled cause to effect experiment. The objective of this experiment is to determine the effect of a cause. As such, the question is “I wonder what this does?” While the actual conducting of such an experiment can be complicated and difficult, the basic idea is rather simple. The first step is to have a question about a causal agent. For example, it might be wondered what effect steroids have on arthritis in elderly dogs. The second step is to determine the target population, which might already be taken care of in the first step—for example, elderly dogs would be the target population. The third step is to pull a random sample from the target population. This sample needs to be representative (that is, it needs to be like the target population and should ideally be a perfect match in miniature). For example, a sample from the population of elderly dogs would ideally include all breeds of dogs, male dogs, female dogs, and so on for all relevant qualities of dogs. The problem with a biased sample is that the inference drawn from the experiment will be weak because the sample might not be adequately like the general population. The sample also needs to be large enough—a sample that is too small will also fail to adequately support the inference drawn from the experiment.

The fourth step involves splitting the sample into the control group and the experimental group. These groups need to be as similar as possible (and can actually be made of the same individuals). The reason they need to be alike is because in the fifth step the experimenters introduce the cause (such as steroids) to the experimental group and the experiment is run to see what difference this makes between the two groups. The final step is getting the results and determining if the difference is statistically significant. This occurs when the difference between the two groups can be confidently attributed to the presence of the cause (as opposed to chance or other factors). While calculating this properly can be complicated, when assessing an experiment (such as a clinical trial) it is easy enough to compare the number of individuals in the sample to the difference between the experimental and control groups. This handy table from Critical Thinking makes this quite easy and also shows the importance of having a large enough sample.

 

Number in Experimental Group      Approximate Difference That Must Be
(with similarly sized             Exceeded to Be Statistically Significant
control group)                    (in percentage points)

   10                             40
   25                             27
   50                             19
  100                             13
  250                              8
  500                              6
1,000                              4
1,500                              3

 

Many “clinical trials” mentioned in articles and blog posts have very small sample sizes and this often makes their results meaningless. This table also shows why anecdotal evidence is fallacious: a sample size of one is all but completely useless when it comes to an experiment.
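The thresholds in the table above can be approximated with a bit of arithmetic. Here is a rough sketch in Python, assuming the normal approximation for a difference of two proportions at the 95% confidence level with a worst-case proportion of 50% (this is my assumption; the textbook may compute its figures slightly differently):

```python
from math import sqrt

def significance_threshold(n, z=1.96):
    """Approximate difference (in percentage points) that must be
    exceeded between an experimental and a similarly sized control
    group to be statistically significant.

    Assumes proportions near 50% (the worst case) and a 95%
    confidence level (z = 1.96)."""
    p = 0.5
    # Standard error of the difference between two sample proportions
    se = sqrt(2 * p * (1 - p) / n)
    return 100 * z * se  # expressed in percentage points

for n in (10, 25, 50, 100, 250, 500, 1000, 1500):
    print(f"{n:>5}: about {significance_threshold(n):.0f} percentage points")
```

Running this reproduces the table to within a few percentage points and makes the key lesson visible: the required difference shrinks only as the square root of the sample size grows, which is why tiny trials can establish so little.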

The above table also assumes that the experiment is run correctly: the sample was representative, the control group was adequately matched to the experimental group, the experimenters were not biased, and so on for all the relevant factors. As such, when considering the results of an experiment it is important to consider those factors as well. If, for example, you are reading an article about an herbal supplement for arthritic dogs and it mentions a clinical trial, you would want to check on the sample size, the difference between the two groups and determine whether the experiment was also properly conducted. Without this information, you would need to rely entirely on the credibility of the source. If the source is credible and claims that the experiment was conducted properly, then it would be reasonable to trust the results. If the source’s credibility is in question, then trust should be withheld. Assessing credibility is a matter of determining expertise and the goal is to avoid being a victim of a fallacious appeal to authority. Here is a short checklist for determining whether a person (or source) is an expert or not:

 

  • The person has sufficient expertise in the subject matter in question.
  • The claim being made by the person is within her area(s) of expertise.
  • There is an adequate degree of agreement among the other experts in the subject in question.
  • The person in question is not significantly biased.
  • The area of expertise is a legitimate area or discipline.
  • The authority in question has been identified.

 

While the experiment is the gold standard, there are times when it cannot be used. In some cases, this is a matter of ethics: exposing people or animals to something potentially dangerous might be deemed morally unacceptable. In other cases, it is a matter of practicality or necessity. In such cases, studies are used.

One type of study is the non-experimental cause to effect study. This is identical to the cause to effect experiment with one rather critical difference: the experimental group is not exposed to the cause by those running the study. For example, a study might be conducted of dogs who recovered from Lyme disease to see what long term effects it has on them.

The study, as would be expected, runs in the same basic way as the experiment and if there is a statistically significant difference between the two groups (and it has been adequately conducted) then it is reasonable to make the relevant inference about the effect of the cause in question.

While useful, this sort of study is weaker than the experiment. This is because those conducting the study have to take what they get—the experimental group is already exposed to the cause and this can create problems in properly sorting out the effect of the cause in question. As such, while a properly run experiment can still get erroneous results, a properly run study is even more likely to have issues.

A second type of study is the effect to cause study. It differs from the cause to effect experiment and study in that the effect is known but the cause is not. Hence, the goal is to infer an unknown cause from the known effect. It also differs from the experiment in that those conducting the study obviously do not introduce the cause.

This study is conducted by comparing the experimental group and the control group (which are, ideally, as similar as possible) to sort out a likely cause by considering the differences between them. As would be expected, this method is far less reliable than the others since those doing the study are trying to backtrack from an effect to a cause. If considerable time has passed since the suspected cause, this can make the matter even more difficult to sort out. Those conducting the study also have to work with the experimental group they happen to get and this can introduce many complications into the study, making a strong inference problematic.

An example of this would be a study of elderly dogs who suffer from paw knuckling (the paw flips over so the dog is walking on the top of the paw) to determine the cause of this effect. As one might suspect, finding the cause would be challenging—there would be a multitude of potential causes in the history of the dogs ranging from injury to disease. It is also quite likely that there are many causes in play here, and this would require sorting out the different causes for this same effect. Because of such factors, the effect to cause study is the weakest of the three and supports the lowest level of confidence in its results even when conducted properly. This explains why it can be so difficult for researchers to determine the causes of many problems that, for example, elderly dogs suffer from.

In the case of Isis, the steroids that she is taking have been well-studied, so it is quite reasonable for me to believe that they are a causal factor in her remarkable recovery. I do not, however, know for sure what caused her knuckling—there are so many potential causes for that effect. However, the important thing is that she is now walking normally about 90% of the time and her tail is back in the air, showing that she is a happy husky.

 


Philosophy & My Old Husky II: Difference & Agreement

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a fast sprinting and long running blur of fur, she now merely saunters along. Still, lesser beasts fear her (and to a husky, all creatures are lesser beasts) and the sun is warm—so her life is still good.

Faced with the challenge of keeping her healthy and happy, I have relied a great deal on what I learned as a philosopher. As noted in the preceding essay, I learned to avoid falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since way before my time. The purpose of the method is to figure out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is quite simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better—although more cases of something bad (like arthritis pain) would certainly be undesirable from other standpoints. The two cases can involve the same individual at different times—they need not involve different individuals (though the method works in those cases as well). For example, when sorting out Isis’ knuckling problem the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also looked into other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared in order to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy—to conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be purely a matter of coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect—especially since she has been eating peanut butter her whole life. It is also important to consider that an alleged cause might actually be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing the knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage. You must also keep in mind the possibility of reversed causation—that the alleged cause is actually the effect. For example, a person might think that the limping is causing the knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be very easy. For example, if a dog slips and falls and then has trouble walking, the most likely cause is the fall (but it could still be something else—perhaps the fall and the walking trouble were both caused by something else). In other cases, sorting out the cause can be very difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (apparently even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert—such as a vet. Medical tests, for example, are useful for sorting out the difference and finding a likely cause.

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of some soreness, I started giving her senior dog food, glucosamine and some extra protein. What followed was an improvement in her mobility and the absence of the signs of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I do consider that I could be wrong. Fortunately, I do have good evidence that the steroids Isis has been prescribed work—she made a remarkable improvement after starting the steroids and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids are the cause of her improvement—though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all the cases. In this method, the cases exhibiting the effect (such as knuckling) are considered in order to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence so as to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
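For those who like their logic executable, the two methods can be sketched as simple set operations over cases, where each case is represented by the factors present in it. This is only a toy illustration (the factor names below are hypothetical, and real causal reasoning must still guard against coincidence, common causes, and reversed causation, as discussed above):

```python
def method_of_agreement(effect_cases):
    """Mill's method of agreement: find the factors common to every
    case in which the effect occurred."""
    common = set(effect_cases[0])
    for case in effect_cases[1:]:
        common &= set(case)
    return common

def method_of_difference(effect_case, no_effect_case):
    """Mill's method of difference: find the factors present when the
    effect occurred but absent when it did not."""
    return set(effect_case) - set(no_effect_case)

# Hypothetical records of days with and without knuckling.
knuckling_days = [
    {"arthritis", "long_walk", "peanut_butter"},
    {"arthritis", "long_walk", "rain"},
]
normal_day = {"peanut_butter", "rain"}

print(sorted(method_of_agreement(knuckling_days)))
# ['arthritis', 'long_walk']
print(sorted(method_of_difference(knuckling_days[0], normal_day)))
# ['arthritis', 'long_walk']
```

The candidate causes the code returns are only candidates: as the essay stresses, a surviving common factor could still be a coincidence or a fellow effect of some deeper cause.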

One of the main weaknesses of these methods is that they tend to have very small sample sizes—sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies—the subject of the next essay in this series.

 


Philosophy & My Old Husky I: Post Hoc & Anecdotal Evidence

My Siberian husky, Isis, joined the pack in 2004 at the age of one. It took her a little while to realize that my house was now her house—she set out to chew all that could be chewed, presumably as part of some sort of imperative of destruction. Eventually, she came to realize that she was chewing her stuff—or so I like to say. More likely, joining me on 8-16 mile runs wore the chew out of her.

As the years went by, we both slowed down. Eventually, she could no longer run with me (despite my slower pace) and we went on slower adventures (one does not walk a husky; one goes adventuring with a husky). Despite her advanced age, she remained active—at least until recently. After an adventure, she seemed slow and sore. She cried once in pain, but then seemed to recover. Then she got worse, requiring a trip to the emergency veterinarian (pets seem to know the regular vet hours and seem to prefer their woes to take place on weekends).

The good news was that the x-rays showed no serious damage—just indication of wear and tear of age. She also had some unusual test results, perhaps indicating cancer. Because of her age, the main concern was with her mobility and pain—as long as she could get about and be happy, then that was what mattered. She was prescribed an assortment of medications and a follow up appointment was scheduled with the regular vet. By then, she had gotten worse in some ways—her right foot was “knuckling” over, making walking difficult. This is often a sign of nerve issues. She was prescribed steroids and had to go through a washout period before starting the new medicine. As might be imagined, neither of us got much sleep during this time.

While all stories eventually end, her story is still ongoing—the steroids seemed to have done the trick. She can go on slow adventures and enjoys basking in the sun—watching the birds and squirrels, willing the squirrels to fall from the tree and into her mouth.

While philosophy is often derided as useless, it was actually very helpful to me during this time and I decided to write about this usefulness as both a defense of philosophy and, perhaps, as something useful for others who face similar circumstances with an aging canine.

Isis’ emergency visit was focused on pain management and one drug she was prescribed was Carprofen (more infamously known by the name Rimadyl). Carprofen is an NSAID that is supposed to be safer for canines than those designed for humans (like aspirin) and is commonly used to manage arthritis in elderly dogs. Being a curious and cautious sort, I researched all the medications (having access to professional journals and a Ph.D.  is handy here). As is often the case with medications, I ran across numerous forums which included people’s sad and often angry stories about how Carprofen killed their pets. The typical story involved what one would expect: a dog was prescribed Carprofen and then died or was found to have cancer shortly thereafter. I found such stories worrisome and was concerned—I did not want my dog to be killed by her medicine. But, I also knew that without medication, she would be in terrible pain and unable to move. I wanted to make the right choice for her and knew this would require making a rational decision.

My regular vet decided to go with the steroid option, one that also has the potential for side effects—complete with the usual horror stories on the web. Once again, it was a matter of choosing between the risks of medication and the consequences of doing without. In addition to my research into the medication, I also investigated various other options for treating arthritis and pain in older dogs. She was already on glucosamine (which might be beneficial, but seems to have no serious side effects), but the web poured forth an abundance of options ranging from acupuncture to herbal remedies. I even ran across the claim that copper bracelets could help pain in dogs.

While some of the alternatives had been subject to actual scientific investigation, the majority of the discussions involved a mix of miracle and horror stories. One person might write glowingly about how an herbal product brought his dog back from death’s door while another might claim that after he gave his dog the product, the dog died because of it. Sorting through all these claims, anecdotes and studies turned out to be a fair amount of work. Fortunately, I had numerous philosophical tools that helped a great deal with such cases, specifically of the sort where it is claimed that “I gave my dog X, then he got better/died and X was the cause.” Knowing about two common fallacies is very useful in these cases.

The first is what is known as Post Hoc Ergo Propter Hoc (“after this, therefore because of this”).  This fallacy has the following form:

  1. A occurs before B.
  2. Therefore A is the cause of B.

This fallacy is committed when it is concluded that one event causes another simply because the proposed cause occurred before the proposed effect. More formally, the fallacy involves concluding that A causes or caused B because A occurs before B and there is not sufficient evidence to actually warrant such a claim.

While cause does precede effect (at least in the normal flow of time), proper causal reasoning, as will be discussed in an upcoming essay, involves sorting out whether A occurring before B is just a matter of coincidence or not. In the case of medication involving an old dog, it could entirely be a matter of coincidence that the dog died or was diagnosed with cancer after the medicine was administered. That is, the dog might have died anyway or might have already had cancer. Without a proper investigation, simply assuming that the medication was the cause would be an error. The same holds true for beneficial effects. For example, a dog might go lame after a walk and then recover after being given an herbal supplement for several days. While it would be tempting to attribute the recovery to the herbs, they might have had no effect at all. After all, lameness often goes away on its own or some other factor might have been the cause.

This is not to say that such stories should be rejected out of hand—it is to say that they should be approached with due consideration that the reasoning involved is post hoc. In concrete terms, if you are afraid to give your dog medicine she was prescribed because you heard of cases in which a dog had the medicine and then died, you should investigate more (such as talking to your vet) about whether there really is a risk of death. As another example, if someone praises an herbal supplement because her dog perked up after taking it, then you should see if there is evidence for this claim beyond the post hoc situation.

Fortunately, there has been considerable research into medications and treatments that provide a basis for making a rational choice. When considering such data, it is important not to be lured into rejecting data by the seductive power of the Fallacy of Anecdotal Evidence.

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation on hasty generalization.  It has the following forms:

Form One

  1. Anecdote A is told about a member (or small number of members) of Population P.
  2. Conclusion C is drawn about Population P based on Anecdote A.

For example, a person might hear anecdotes about dogs that died after taking a prescribed medication and infer that the medicine is likely to kill dogs.

Form Two

  1. Reasonable statistical evidence S exists for general claim C.
  2. Anecdote A is presented that is an exception to or goes against general claim C.
  3. Conclusion: General claim C is rejected.

For example, the statistical evidence shows that the claim that glucosamine-chondroitin can treat arthritis is, at best, very weakly supported. But, a person might tell a story about how their aging husky “was like a new dog” after she started getting a daily dose of the supplement. To accept this as proof that the data is wrong would be to fall for this fallacy. That said, I do give my dog glucosamine-chondroitin because it is cheap, has no serious side effects and might have some benefit. I am fully aware of the data and do not reject it—I am gambling that it might do my husky some good.

The way to avoid becoming a victim of anecdotal evidence is to seek reliable, objective statistical data about the matter in question (a vet should be a good source). This can, I hasten to say, be quite a challenge when it comes to treatments for pets. In many cases, there are no adequate studies or trials that provide statistical data and all the information available is in the form of anecdotes. One option is, of course, to investigate the anecdotes and try to do your own statistics. So, if the majority of anecdotes indicate something harmful (or something beneficial) then this would be weak evidence for the claim. In any case, it is wise to approach anecdotes with due care—a story is not proof.

Threat Assessment I: A Vivid Spotlight

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is, very broadly speaking, the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both of these factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is a very low severity threat. There is an exceedingly low probability that there will be a mass shooting, but it is a high severity threat since it can result in injury or death.

While humans have done a fairly good job at surviving, this seems to have been despite our amazingly bad skills at rational threat assessment. To be specific, the worry people feel in regards to a threat generally does not match up with the actual probability of the threat occurring. People do seem somewhat better at assessing the severity, though they are also often in error about this.

One excellent example of poor threat assessment is in regards to the fear Americans have in regards to domestic terrorism. As of December 15, 2015 there have been 45 people killed in the United States in attacks classified as “violent jihadist attacks” and 48 people killed in attacks classified as “far right wing attacks” since 9/11/2001.  In contrast, there were 301,797 gun deaths from 2005-2015 in the United States and over 30,000 people are killed each year in motor vehicle crashes in the United States.

Despite the incredibly low likelihood of a person being killed by an act of terrorism in the United States, many people are terrified by terrorism (which is, of course, the goal of terrorism) and have become rather focused on the matter since the murders in San Bernardino. Although there have been no acts of terrorism on the part of refugees in the United States, many people are terrified of refugees and this has led to calls for refusing to accept Syrian refugees, and Donald Trump has famously called for a ban on all Muslims entering the United States.

Given that an American is vastly more likely to be killed while driving than killed by a terrorist, it might be wondered why people are so incredibly bad at this sort of threat assessment. The answer, in regards to having fear vastly out of proportion to the probability is easy enough—it involves a cognitive bias and some classic fallacies.

People follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use proper statistical methods, people generally fall victim to the bias known as the availability heuristic. The idea is that a person unconsciously assigns a probability to something based on how often they think of that sort of event. While an event that occurs often will tend to be thought of often, the fact that something is often thought of does not make it more likely to occur.

After an incident of domestic terrorism, people think about terrorism far more often and thus tend to unconsciously believe that the chance of terrorism occurring is far higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is incredibly low (driving to the beach is vastly more likely to kill you than a shark is). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, very difficult: people tend to regard their feelings, however unwarranted, as the best evidence—even though it is usually the worst evidence.

People are also misled about probability by various fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain class or type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention is focused on that fact, leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most Christians are terrorists because the media covered a terrorist who was Christian (who shot up a Planned Parenthood). If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a domestic terrorist attack by Muslims.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. This fallacy is similar to hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.

People often fall victim to this fallacy because stories and anecdotes tend to have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence in favor of said anecdote. Not surprisingly, people most commonly accept this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of domestic terrorism or tell the story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring. For example, people point to the claim that one of the terrorists in Paris masqueraded as a refugee and infer that refugees pose a great threat to the United States. Or they tell the story about the one attacker in San Bernardino who arrived in the states on a K-1 (“fiancé”) visa and make unwarranted conclusions about the danger of the visa system (which is used by about 25,000 people a year).

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh a significant amount of statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is particularly vivid or dramatic does not make the event more likely to occur, especially in the face of significant statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases tend to make a very strong impression on the human mind. For example, mass shootings by domestic terrorists are vivid and awful, so it is hardly surprising that people feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more a person feels that it will occur.

It should be kept in mind that taking into account the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide to never go skydiving because the effects of an accident can be very, very dramatic. If he knows that, statistically, the chances of an accident happening are very low but he considers even a small risk to be unacceptable, then he would not be making this error in reasoning. This then becomes a matter of value judgment—how much risk is a person willing to tolerate relative to the severity of the potential harm.
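The distinction between a rational risk assessment and misleading vividness can be made concrete with a toy expected-harm calculation. This is only an illustrative sketch with entirely made-up numbers: a rational comparison weighs probability and severity together, while misleading vividness weighs severity alone.

```python
def expected_harm(probability, severity):
    # Expected harm = the chance of the event times how bad it would be.
    # Misleading vividness, in effect, ignores the probability factor.
    return probability * severity

# Hypothetical numbers, chosen only for illustration:
# a vivid, rare harm versus a mundane, more common one.
vivid_rare_harm = expected_harm(0.0002, 1_000_000)    # dramatic but very unlikely
mundane_common_harm = expected_harm(0.01, 50_000)     # unremarkable but likelier

print(vivid_rare_harm)      # 200.0
print(mundane_common_harm)  # 500.0
```

On these (fabricated) numbers the mundane harm is the greater expected danger, even though the vivid one makes the stronger impression. The value judgment the essay describes enters when a person decides that even a tiny probability of a catastrophic harm is unacceptable, regardless of the arithmetic.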

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than on what the facts show.

Such rational assessment of threats is rather important for both practical and moral reasons, and terrorism is no exception. Since society has limited resources, using them wisely requires assessing the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter a minuscule threat while spending little on leading causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating fear that is unfounded. In addition to the psychological harm to individuals, there is also the damage to the social fabric. There has already been an increase in attacks on Muslims in America and people are seriously considering abandoning core American values, such as the freedom of religion and being good Samaritans.

In light of the above, I urge people to think rather than feel their way through their concerns about terrorism. Also, I urge people to stop listening to Donald Trump. He has the right of free expression, but people also have the right of free listening.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Doubling Down


One interesting phenomenon is the tendency of people to double down on beliefs. For those not familiar with doubling down, this occurs when a person is confronted with evidence against a beloved belief and her belief, far from being weakened by the evidence, is strengthened.

One rather plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. Roughly put, when a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the logical inference that the belief is not true, then the rational thing to do is reject the old belief. If the evidence is not plausible or does not strongly support the logical inference that the belief is untrue, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

As might be suspected, the assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information about the matter, and credible sources. This assessment can merely push the matter back: the evidence for the evidence will also need to be assessed, which serves to fuel some classic skeptical arguments about the impossibility of knowledge. The idea is that every belief must be assessed and this would lead to an infinite regress, thus making knowing whether a belief is true or not impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some evidence for the belief—even if the evidence is faith or some sort of revelation.

In terms of assessing the reasoning, the matter is entirely objective if it is deductive logic. Deductive logic is such that if an argument is doing what it is supposed to do (that is, if it is valid), then true premises guarantee a true conclusion. Deductive arguments can be assessed by such things as truth tables, Venn diagrams and proofs, thus the reasoning is objectively good or bad. Inductive reasoning is a different matter. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Unlike deductive arguments, inductive arguments vary greatly in strength and, while there are standards of assessment, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.

A second option is to reject the evidence without undergoing the process of honestly assessing the evidence and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or just feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can simply reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies are commonly employed in this task).

This rejection costs less psychologically than engaging the evidence and reasoning, but is often not free. Since the person probably has some awareness of the self-deception, it needs to be psychologically “justified” and this seems to result in the person strengthening her commitment to the belief. People seem to have all sorts of interesting cognitive biases that help out here, such as confirmation bias and other forms of motivated reasoning. These can be rather hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” their beliefs is by regarding the evidence and opposing arguments as an unjust attack, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer that they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. This is rather problematic reasoning—as shown by the fact that people do not infer that they are in error just because people are supporting them.

People also, as John Locke argued in his work on enthusiasm, treat how strongly they feel about a belief as evidence for its truth. When people are challenged, they typically feel angry, and this strong emotion makes them feel even more strongly about the belief. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even more strongly that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with one of doubling down. This accusation, after all, comes with the insinuation that the person is in error and is thus irrationally holding to a false belief. The reasonable defense is to show that evidence and arguments are being used in support of the belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with logic. My opponents double down.

 


Trump & Truth

As this is being written at the end of November, Donald Trump is still the leading Republican presidential candidate. While some might take the view that this is in spite of the outrageous and terrible things Trump says, a better explanation is that he is doing well because of this behavior. Some regard it as evidence of his authenticity and find it appealing in the face of so many slick and seemingly inauthentic politicians (Hillary Clinton and Jeb Bush are regarded by some as examples of this). Some agree with what Trump says and thus find this behavior very appealing.

Trump was once again in the media spotlight for an outrageous claim. This time, he made a claim about something he believed happened on 9/11: “Hey, I watched when the World Trade Center came tumbling down. And I watched in Jersey City, New Jersey, where thousands and thousands of people were cheering as that building was coming down. Thousands of people were cheering.”

Trump was immediately called on this claim on the grounds that it is completely untrue. While it would be as reasonable to dismiss Trump’s false claim as it would be to dismiss any other claim that is quite obviously untrue, the Washington Post and Politifact undertook a detailed investigation. On the one hand, it seems needless to dignify such a falsehood with investigation. On the other hand, since Trump is the leading Republican candidate, his claims could be regarded as meriting the courtesy of a fact check rather than simple dismissal as being patently ludicrous. As should be expected, while they did find some urban myths and rumors, they found absolutely no evidence supporting Trump’s claim.

Rather impressively, Trump decided to double down on his claim rather than acknowledging that it is manifestly false. His confidence has also caused some of his supporters to accept his claim, typically with vague references about having some memory of something that would support Trump’s claim. This is consistent with the way ideologically motivated “reasoning” works: when confronted with evidence against a claim that is part of one’s ideological identity, the strength of the belief becomes even stronger. This holds true across the political spectrum and into other areas as well. For example, people who strongly identify with the anti-vaccination movement not only dismiss the overwhelming scientific evidence against their views, they often double down on their beliefs and some even take criticism as more proof that they are right.

This tendency does make psychological sense—when part of a person’s identity is at risk, it is natural to engage in a form of wishful thinking and accept (or reject) a claim because one really wants the claim to be true (or false). However, wishful thinking is fallacious thinking—wanting a claim to be true does not make it true. As such, this tendency is a defect in a person’s rationality and giving in to it will generally lead to poor decision making.

There is also the fact that since at least the time of Nixon a narrative about liberal media bias has been constructed and implanted into the minds of many. This provides an all-purpose defense against almost any negative claims made by the media about conservatives. Because of this, Trump’s defenders can allege that the media covered up the story (which would, of course, contradict his claim that he saw all those people in another city celebrating 9/11) or that they are now engaged in a conspiracy against Trump.

A rather obvious problem with the claim that the media is engaged in some sort of conspiracy is that if Trump saw those thousands celebrating in New Jersey, then there should be no shortage of witnesses and video evidence. However, there are no witnesses and no such video evidence. This is because Trump’s claim is not true.

While it would be easy to claim that Trump is simply lying, this might not be the case. As discussed in an earlier essay I wrote about presidential candidate Ben Carson’s untrue claims, a claim being false is not sufficient to make it a lie. For example, a person might say that she has $20 in her pocket but be wrong because a pickpocket stole it a few minutes ago. Her claim would be untrue, but it would be a mistake to accuse her of being a liar. While this oversimplifies things quite a bit, for Trump to be lying about this he would need to believe that what he is saying is not true and be engaged in the right (or, rather, wrong) sort of intent. The matter of intent is important for obvious reasons, such as distinguishing fiction writers from liars. If Trump believes what he is saying, then he would not be lying.

While it might seem inconceivable that Trump really believes such an obvious untruth, it could very well be the case. Memory, as has been well established, is notoriously unreliable. People forget things and fill in the missing pieces with bits of fiction they think are facts. This happens to all of us because of our imperfect memories and a need for a coherent narrative. There is also the fact that people can convince themselves that something is true—often by using various rhetorical techniques on themselves. One common way this is done is by repetition—the more often people hear a claim repeated, the more likely it is that they will accept it as true, even when there is no evidence for the claim. This is why the use of repeated talking points is such a popular strategy among politicians, pundits and purveyors. Trump might have told himself his story so many times that he now sincerely believes it, and once it is cemented in his mind, it will be all but impossible for any evidence or criticism to dislodge his narrative. If this is the case, in his mind there were such massive celebrations and he probably can even “remember” the images and sounds—such is the power of the mind.

Trump could, of course, be well aware that he is saying something untrue but has decided to stick with his claim. This would make considerable sense—while people are supposed to pay a price for being completely wrong and an even higher price for lying, Trump has been rewarded with more coverage and more support with each new outrageous thing he does or says. Because of this success, Trump has excellent reasons to continue doing what he has been doing. It might take him all the way to the White House.

 


The Left’s Defection from Progress

Note: This is a slightly abridged (but otherwise largely warts and all) version of an article that I had published in Quadrant magazine in April 1999. It has not previously been published online (except that I am cross-posting on my own blog, Metamagician and the Hellfire Club). While my views have developed somewhat in the interim, there may be some advantage in republishing it for a new audience, especially at a time when there is much discussion of a “regressive left”.

I.

In a recent mini-review of David Stove’s Anything Goes: Origins of Scientific Irrationalism (originally published in 1982 as Popper and After), Diane Carlyle and Nick Walker make a casual reference to Stove’s “reactionary polemic”. By contrast, they refer to the philosophies of science that Stove attacks as “progressive notions of culture-based scientific knowledge”. To say the least, this appears tendentious.

To be fair, Carlyle and Walker end up saying some favourable things about Stove’s book. What is nonetheless alarming about their review is that it evidences just how easy it has become to write as if scientific realism were inherently “reactionary” and the more or less relativist views of scientific knowledge that predominate among social scientists and humanities scholars were “progressive”.

The words “reactionary” and “progressive” usually attach themselves to political and social movements, some kind of traditionalist or conservative backlash versus an attempt to advance political liberties or social equality. Perhaps Carlyle and Walker had another sense in mind, but the connotations of their words are pretty inescapable. Moreover, they would know as well as I do that there is now a prevalent equation within the social sciences and humanities of relativist conceptions of truth and reality with left-wing social critique, and of scientific realism with the political right. Carlyle and Walker wrote their piece against that background. But where does it leave those of us who retain at least a temperamental attachment to the left, however nebulous that concept is becoming, while remaining committed to scientific realism? To adapt a phrase from Christina Hoff Sommers, we are entitled to ask about who has been stealing socially liberal thought in general.

Is the life of reason and liberty (intellectual and otherwise) that some people currently enjoy in some countries no more than an historical anomaly, a short-lived bubble that will soon burst? It may well appear so. Observe the dreadful credulity of the general public in relation to mysticism, magic and pseudoscience, and the same public’s preponderant ignorance of genuine science. Factor in the lowbrow popularity of religious fundamentalism and the anti-scientific rantings of highbrow conservatives such as Bryan Appleyard. Yet the sharpest goad to despair is the appearance that what passes for the intellectual and artistic left has now repudiated the Enlightenment project of conjoined scientific and social progress.

Many theorists in the social sciences and humanities appear obsessed with dismantling the entirety of post-Enlightenment political, philosophical and scientific thought. This is imagined to be a progressive act, desirable to promote the various social, cultural and other causes that have become politically urgent in recent decades, particularly those associated with sex, race, and the aftermath of colonialism. The positions on these latter issues taken by university-based theorists give them a claim to belong to, if not actually constitute, the “academic left”, and I’ll refer to them with this shorthand expression.

There is, however, nothing inherently left-wing about wishing to sweep away our Enlightenment legacy. Nor is a commitment to scientific inquiry and hard philosophical analysis inconsistent with socially liberal views. Abandonment of the project of rational inquiry, with its cross-checking of knowledge in different fields, merely opens the door to the worst kind of politics that the historical left could imagine, for the alternative is that “truth” be determined by whoever, at particular times and places, possesses sufficient political or rhetorical power to decide what beliefs are orthodox. The rationality of our society is at stake, but so is the fate of the left itself, if it is so foolish as to abandon the standards of reason for something more like a brute contest for power.

It is difficult to know where to start in criticising the academic left’s contribution to our society’s anti-rationalist drift. The approaches I am gesturing towards are diverse among themselves, as well as being professed in the universities side by side with more traditional methods of analysing society and culture. There is considerable useful dialogue among all these approaches, and it can be difficult obtaining an accurate idea of specific influences within the general intellectual milieu.

However, amidst all the intellectual currents and cross-currents, it is possible to find something of a common element in the thesis or assumption (sometimes one, sometimes the other) that reality, or our knowledge of it, is “socially constructed”. There are many things this might mean, and I explain below why I do not quarrel with them all.

In the extreme, however, our conceptions of reality, truth and knowledge are relativised, leading to absurd doctrines, such as the repudiation of deductive logic or the denial of a mind-independent world. Symptomatic of the approach I am condemning is a subordination of the intellectual quest for knowledge and understanding to political and social advocacy. Some writers are prepared to misrepresent mathematical and scientific findings for the purposes of point scoring or intellectual play, or the simple pleasure of ego-strutting. All this is antithetical to Enlightenment values, but so much – it might be said – for the Enlightenment.

II.

The notion that reality is socially constructed would be attractive and defensible if it were restricted to a thesis about the considerable historical contingency of any culture’s social practices and mores, and its systems of belief, understanding and evaluation. These are, indeed, shaped partly by the way they co-evolve and “fit” with each other, and by the culture’s underlying economic and other material circumstances.

The body of beliefs available to anyone will be constrained by the circumstances of her culture, including its attitude to free inquiry, the concepts it has already built up for understanding the world, and its available technologies for the gathering of data. Though Stove is surely correct to emphasise that the accumulation of empirical knowledge since the 17th century has been genuine, the directions taken by science have been influenced by pre-existing values and beliefs. Meanwhile, social practices, metaphysical and ethical (rather than empirical) beliefs, the methods by which society is organised and by which human beings understand their experience are none of them determined in any simple, direct or uniform way by human “nature” or biology, or by transcendental events.

So far, so good – but none of this is to suggest that all of these categories should or can be treated in exactly the same way. Take the domain of metaphysical questions. Philosophers working in metaphysics are concerned to understand such fundamentals as space, time, causation, the kinds of substances that ultimately exist, the nature of consciousness and the self. The answers cannot simply be “read off” our access to empirical data or our most fundamental scientific theories, or some body of transcendental knowledge. Nonetheless, I am content to assume that all these questions, however intractable we find them, have correct answers.

The case of ethical disagreement may be very different, and I discuss it in more detail below. It may be that widespread and deep ethical disagreement actually evidences the correctness of a particular metaphysical (and meta-ethical) theory – that there are no objectively existing properties of moral good and evil. Yet, to the extent that they depend upon empirical beliefs about the consequences of human conduct, practical moral judgements may often be reconcilable. Your attitude to the rights of homosexuals will differ from mine if yours is based on a belief that homosexual acts cause earthquakes.

Again, the social practices of historical societies may turn out to be constrained by our biology in a way that is not true of the ultimate answers to questions of metaphysics. All these are areas where human behaviour and belief may be shaped by material circumstances and the way they fit with each other, and relatively unconstrained by empirical knowledge. But, to repeat, they are not all the same.

Where this appears to lead us is that, for complicated reasons and in awkward ways, there is much about the practices and beliefs of different cultures that is contingent on history. In particular, the way institutions are built up around experience is more or less historically contingent, dependent largely upon economic and environmental circumstances and on earlier or co-evolving layers of political and social structures. Much of our activity as human beings in the realms of understanding, organising, valuing and responding to experience can reasonably be described as “socially constructed”, and it will often make perfectly good sense to refer to social practices, categories, concepts and beliefs as “social constructions”.

Yet this modest insight cries out for clear intellectual distinctions and detailed application to particular situations, with conscientious linkages to empirical data. It cannot provide a short-cut to moral perspicuity or sound policy formulation. Nor is it inconsistent with a belief in the actual existence of law-governed events in the empirical world, which can be the subject of objective scientific theory and accumulating knowledge.

III.

As Antony Flew once expressed it, what is socially constructed is not reality itself but merely “reality”: the beliefs, meanings and values available within a culture.

Thus, none of what I’ve described so far amounts to “social constructionism” in a pure or philosophical sense, since this would require, in effect, that we never have any knowledge. It would require a thesis that all beliefs are so deeply permeated by socially specific ideas that they never transcend their social conditions of production to the extent of being about objective reality. To take this a step further, even the truth about physical nature would be relative to social institutions – relativism applies all the way down.

Two important points need to be made here. First, even without such a strong concept of socially constructed knowledge, social scientists and humanities scholars have considerable room to pursue research programs aimed at exploring the historically contingent nature of social institutions. In the next section, I argue that this applies quintessentially to socially accepted moral beliefs.

Second, however, there is a question as to why anyone would insist upon the thesis that the nature of reality is somehow relative to social beliefs all the way down, that there is no point at which we ever hit a bedrock of truth and falsity about anything. It is notorious that intellectuals who use such language sometimes retreat, when challenged, to a far more modest or equivocal kind of position.

Certainly, there is no need for anyone’s political or social aims to lead them to deny the mind-independent existence of physical nature, or to suggest that the truth about it is, in an ultimate sense, relative to social beliefs or subjective to particular observers. Nonetheless, many left-wing intellectuals freely express a view in which reality, not “reality”, is a mere social construction.

IV.

If social construction theory is to have any significant practical bite, then it has to assert that moral beliefs are part of what is socially constructed. I wish to explore this issue through some more fundamental considerations about ethics.

It is well-documented that there are dramatic contrasts between different societies’ practical beliefs about what is right and wrong, so much so that the philosopher J.L. Mackie said that these “make it difficult to treat those judgements as apprehensions of objective truths.” As Mackie develops the argument, it is not part of some general theory that “the truth is relative”, but involves a careful attempt to show that the diversity of moral beliefs is not analogous to the usual disagreements about the nature of the physical world.

Along with other arguments put by philosophers in Hume’s radical empiricist tradition, Mackie’s appeal to cultural diversity may persuade us that there are no objective moral truths. Indeed, it seems to me that there are only two positions here that are intellectually viable. The first is that Mackie is simply correct. This idea might seem to lead to cultural relativism about morality, but things are not always what they seem.

The second viable position is that there are objective moral truths, but they take the form of principles of an extremely broad nature, broad enough to help shape – rather than being shaped by – a diverse range of social practices in different environmental, economic and other circumstances.

If this is so, particular social practices and practical moral beliefs have some ultimate relationship to fundamental moral principles, but there can be enormous “slippage” between the two, depending on the range of circumstances confronting different human societies. Moreover, during times of rapid change such as industrialised societies have experienced in the last three centuries – and especially the last several decades – social practices and practical moral beliefs might tend to be frozen in place, even though they have become untenable. Conversely, there might be more wisdom, or at least rationality, than is apparent to most Westerners in the practices and moral beliefs of traditional societies. All societies, however, might have practical moral beliefs that are incorrect because of lack of empirical knowledge about the consequences of human conduct.

Taken with my earlier, more general, comments about various aspects of social practices and culturally-accepted “reality”, this approach gives socially liberal thinkers much of what they want. It tends to justify those who would test and criticise the practices and moral beliefs of Western nations while defending the rationality and sophistication of people from colonised cultures.

V.

The academic left’s current hostility to science and the Enlightenment project may have its origins in a general feeling, brought on by the twentieth century’s racial and ideological atrocities, that the Enlightenment has failed. Many intellectuals have come to see science as complicit in terror, oppression and mass killing, rather than as an inspiration for social progress.

The left’s hostility has surely been intensified by a quite specific fear that the reductive study of human biology will cross a bridge from the empirical into the normative realm, where it may start to dictate the political and social agenda in ways that can aptly be described as reactionary. This, at least, is the inference I draw from left-wing intellectuals’ evident detestation of human sociobiology or evolutionary psychology.

The fear may be that dubious research in areas such as evolutionary psychology and/or cognitive neuroscience will be used to rationalise sexist, racist or other illiberal positions. More radically, it may be feared that genuine knowledge of a politically unpalatable or otherwise harmful kind will emerge from these areas. Are such fears justified? To dismiss them lightly would be irresponsible and naive. I can do no more than place them in perspective. The relationship between the social sciences and humanities, on the one hand, and the “hard” end of psychological research, on the other, is one of the most important issues to be tackled by intellectuals in all fields – the physical sciences, social sciences and humanities.

One important biological lesson we have learned is that human beings are not, in any reputable sense, divided into “races”. As an empirical fact of evolutionary history and genetic comparison, we are all so alike that superficial characteristics such as skin or hair colour signify nothing about our moral or intellectual worth, or about the character of our inner experience. Yet, what if it had turned out otherwise? It is understandable if people are frightened by our ability to research such issues. At the same time, the alternative is to suppress rational inquiry in some areas, leaving questions of orthodoxy to whoever wins the naked contest for power. This is neither rational nor safe.

What implications could scientific knowledge about ourselves have for moral conduct or social policy? No number of factual statements about human nature, by themselves, can ever entail statements that amount to moral knowledge, as Hume demonstrated. What is required is an ethical theory, persuasive on other grounds, that already links “is” and “ought”. This might be found, for example, in a definition of moral action in terms of human flourishing, though it is not clear why we should, as individuals, be concerned about something as abstract as that – why not merely the flourishing of ourselves or our particular loved ones?

One comfort is that, even if we had a plausible set of empirical and meta-ethical gadgets to connect what we know of human nature to high-level questions about social policy, we would discover significant slippage between levels. Nature does not contradict itself, and no findings from a field such as evolutionary psychology could be inconsistent with the observed facts of cultural diversity. If reductive explanations of human nature became available in more detail, these must turn out to be compatible with the existence of the vast spectrum of viable cultures that human beings have created so far. And there is no reason to believe that a lesser variety of cultures will be workable in the material circumstances of a high-technology future.

The dark side of evolutionary psychology includes, among other things, some scary-looking claims about the reproductive and sociopolitical behaviour of the respective sexes. True, no one seriously asserts that sexual conduct in human societies and the respective roles of men and women within families and extra-familial hierarchies are specified by our genes in a direct or detailed fashion. What, however, are we to make of the controversial analyses of male and female reproductive “strategies” that have been popularised by several writers in the 1990s? Perhaps the best-known exposition is that of Matt Ridley in The Red Queen: Sex and the Evolution of Human Nature (1993). Such accounts offer evidence and argument that men are genetically hardwired to be highly polygamous or promiscuous, while women are similarly programmed to be imperfectly monogamous, as well as sexually deceitful.

In responding to this, first, I am in favour of scrutinising the evidence for such claims very carefully, since they can so readily be adapted to support worn-out stereotypes about the roles of the sexes. That, however, is a reason to show scientific and philosophical rigour, not to accept strong social constructionism about science. Secondly, even if findings similar to those synthesised by Ridley turned out to be correct, the social consequences are by no means apparent. Mere biological facts cannot tell us in some absolute way what are the correct sexual mores for a human society.

To take this a step further, theories about reproductive strategies suggest that there are in-built conflicts between the interests of men and women, and of higher and lower status men, which will inevitably need to be moderated by social compromise, not necessarily in the same way by different cultures. If all this were accepted for the sake of argument, it might destroy a precious notion about ourselves: that there is a simple way for relations between the sexes to be harmonious. On the other hand, it would seem to support rather than refute what might be considered a “progressive” notion: that no one society, certainly not our own, has the absolutely final answer to questions about sexual morality.

Although evolutionary psychology and cognitive neuroscience are potential minefields, it is irrational to pretend that they are incapable of discovering objective knowledge. Fortunately, such knowledge will surely include insight into the slippage between our genetic similarity and the diversity of forms taken by viable cultures. The commonality of human nature will be at a level that is consistent with the (substantial) historical contingency of social practices and of many areas of understanding and evaluative belief. The effect on social policy is likely to be limited, though we may become more charitable about what moral requirements are reasonable for the kinds of creatures that we are.

I should add that evolutionary psychology and cognitive neuroscience are not about to put the humanities, in particular, out of business. There are good reasons why the natural sciences cannot provide a substitute for humanistic explanation, even if we obtain a far deeper understanding of our own genetic and neurophysiological make-up. This is partly because reductive science is ill-equipped to deal with the particularity of complex events, partly because causal explanation may not be all that we want, anyway, when we try to interpret and clarify human experience.

VI.

Either there are no objective moral truths or they are of an extremely general kind. Should we, therefore, become cultural relativists?

Over a quarter of a century ago, Bernard Williams made the sharp comment that cultural relativism is “possibly the most absurd view to have been advanced even in moral philosophy”. To get this clear, Williams was criticising a cluster of beliefs that has a great attraction for left-wing academics and many others who preach inter-cultural tolerance: first, that what is “right” means what is right for a particular culture; second, that what is right for a particular culture refers to what is functionally valuable for it; and third, that it is “therefore” wrong for one culture to interfere with the organisation or values of another.

As Williams pointed out, these propositions are internally inconsistent. Not only does the third not follow from the others; it cannot be asserted while the other two are maintained. After all, it may be functionally valuable to culture A (and hence “right” within that culture) for it to develop institutions for imposing its will on culture B. These may include armadas and armies, colonising expeditions, institutionalised intolerance, and aggressively proselytising religions. In fact, nothing positive in the way of moral beliefs, political programs or social policy can ever be derived merely from a theory of cultural relativism.

That does not mean that there are no implications at all from the insight that social practices and beliefs are, to a large degree, contingent on history and circumstance. Depending upon how we elaborate this insight, we may have good reason to suspect that another culture’s odd-looking ways of doing things are more justifiable against universal principles of moral value than is readily apparent. In that case, we may also take the view that the details of how our own society, or an element of it, goes about things are open to challenge as to how far they are (or remain?) justifiable against such universal principles.

If, on the other hand, we simply reject the existence of any objective moral truths – which I have stated to be a philosophically viable position – we will have a more difficult time explaining why we are active in pursuing social change. Certainly, we will not be able to appeal to objectively applicable principles to justify our activity. All the same, we may be able to make positive commitments to ideas such as freedom, equality or benevolence that we find less arbitrary and more psychologically satisfying than mere acquiescence in “the way they do things around here”. In no case, however, can we intellectually justify a course of political and social activism without more general principles or commitments to supplement the bare insight that, in various complicated ways, social beliefs and practices are largely contingent.

VII.

An example of an attempt to short-circuit the kind of hard thinking about moral foundations required to deal with contentious issues is Martin F. Katz’s well-known article, “After the Deconstruction: Law in the Age of Post-Structuralism”. Katz is a jurisprudential theorist who is committed to a quite extreme form of relativism about empirical knowledge. In particular, his article explicitly assigns the findings of physical science the same status as the critical interpretations of literary works.

Towards the end of “After the Deconstruction”, Katz uses the abortion debate as an example of how what he calls “deconstructionism” or the “deconstructionist analysis” can clarify and arbitrate social conflict. He begins by stating the debate much as it might be seen by its antagonists:

One side of the debate holds that abortion is wrong because it involves the murder of an unborn baby. The other side of the debate sees abortion as an issue of self-determination; the woman’s right to choose what she does to her body. How do we measure which of these “rights” should take priority?

In order to avoid any sense of evasion, I’ll state clearly that the second of these positions, the “pro-choice” position, is closer to my own. However, either position has more going for it in terms of rationality than what Katz actually advocates.

This, however, is not how Katz proposes to solve the problem of abortion. Instead, he states that “deconstructionism” recommends that we “resist the temptation to weigh the legitimacy of . . . these competing claims.” We should consider, rather, the different “subjugations” supposedly instigated by the pro-life and pro-choice positions. The pro-life position is condemned because it denies women the choice of what role they wish to take in society, while the pro-choice position is apparently praised (though even this is not entirely clear) for shifting the decision about whether and when to have children directly to women.

The trouble with this is that it prematurely forecloses on the metaphysical and ethical positions at stake, leaving everything to be solved in terms of power relations. However, if we believe that a foetus (say at a particular age) is a person in some sense that entails moral regard, or a being that possesses a human soul, then there are moral consequences. Such beliefs, together with some plausible assumptions about our moral principles or commitments, entail that we should accept that aborting the foetus is an immoral act. The fact that banning the abortion may reduce the political power of the woman concerned, or of women generally, over against that of men will seem to have little moral bite, unless we adopt a very deep principle of group political equality. That would require ethical argument of an intensity which Katz never attempts.

If we take it that the foetus is not a person in the relevant sense, we may be far more ready to solve the problem (and to advocate an assignment of “rights”) on the basis of utilitarian, or even libertarian, principles. By contrast, the style of “deconstructionist” thought advocated by Katz threatens to push rational analysis aside altogether, relying on untheorised hunches or feelings about how we wish power to be distributed in our society. This approach can justifiably be condemned as irrational. At the same time, the statements that Katz makes about the political consequences for men or women of banning or legalising abortion are so trite that it is difficult to imagine how anyone not already beguiled by an ideology could think that merely stating them could solve the problem.

VIII.

In the example of Katz’s article, as in the general argument I have put, the insight that much in our own society’s practices and moral beliefs is “socially constructed” can do only a modest amount of intellectual work. We may have good reason to question the way they do things around here, to subject it to deeper analysis. We may also have good reason to believe that the “odd” ways they do things in other cultures make more sense than is immediately apparent to the culture-bound Western mind. All very well. None of this, however, can undermine the results of systematic empirical inquiry. Nor can it save us from the effort of grappling with inescapable metaphysical and ethical questions, just as we had to do before the deconstruction.
