
Against accommodationism: How science undermines religion

Faith versus Fact
There is currently a fashion for religion/science accommodationism, the idea that there’s room for religious faith within a scientifically informed understanding of the world.

Accommodationism of this kind gains endorsement even from official science organizations such as, in the United States, the National Academy of Sciences and the American Association for the Advancement of Science. But how well does it withstand scrutiny?

Not too well, according to a new book by distinguished biologist Jerry A. Coyne.

Gould’s magisteria

The most famous, or notorious, rationale for accommodationism was provided by the celebrity palaeontologist Stephen Jay Gould in his 1999 book Rocks of Ages. Gould argues that religion and science possess separate and non-overlapping “magisteria”, or domains of teaching authority, and so they can never come into conflict unless one or the other oversteps its domain’s boundaries.

If we accept the principle of Non-Overlapping Magisteria (NOMA), the magisterium of science relates to “the factual construction of nature”. By contrast, religion has teaching authority in respect of “ultimate meaning and moral value” or “moral issues about the value and meaning of life”.

On this account, religion and science do not overlap, and religion is invulnerable to scientific criticism. Importantly, however, this is because Gould is ruling out many religious claims as being illegitimate from the outset even as religious doctrine. Thus, he does not attack the fundamentalist Christian belief in a young earth merely on the basis that it is incorrect in the light of established scientific knowledge (although it clearly is!). He claims, though with little real argument, that it is illegitimate in principle to hold religious beliefs about matters of empirical fact concerning the space-time world: these simply fall outside the teaching authority of religion.

I hope it’s clear that Gould’s manifesto makes an extraordinarily strong claim about religion’s limited role. Certainly, most actual religions have implicitly disagreed.

The category of “religion” has been defined and explained in numerous ways by philosophers, anthropologists, sociologists, and others with an academic or practical interest. There is much controversy and disagreement. All the same, we can observe that religions have typically been somewhat encyclopedic, or comprehensive, explanatory systems.

Religions usually come complete with ritual observances and standards of conduct, but they are more than mere systems of ritual and morality. They typically make sense of human experience in terms of a transcendent dimension to human life and well-being. Religions relate these to supernatural beings, forces, and the like. But religions also make claims about humanity’s place – usually a strikingly exceptional and significant one – in the space-time universe.

It would be naïve or even dishonest to imagine that this somehow lies outside of religion’s historical role. While Gould wants to avoid conflict, he creates a new source for it, since the principle of NOMA is itself contrary to the teachings of most historical religions. At any rate, leaving aside any other, or more detailed, criticisms of the NOMA principle, there is ample opportunity for religion(s) to overlap with science and come into conflict with it.

Coyne on religion and science

The genuine conflict between religion and science is the theme of Jerry Coyne’s Faith versus Fact: Why Science and Religion are Incompatible (Viking, 2015). This book’s appearance was long anticipated; it’s a publishing event that prompts reflection.

In pushing back against accommodationism, Coyne portrays religion and science as “engaged in a kind of war: a war for understanding, a war about whether we should have good reasons for what we accept as true.” Note, however, that he is concerned with theistic religions that include a personal God who is involved in history. (He is not, for example, dealing with Confucianism, pantheism or austere forms of philosophical deism that postulate a distant, non-interfering God.)

Accommodationism is fashionable, but that has less to do with its intellectual merits than with widespread solicitude toward religion. There are, furthermore, reasons why scientists in the USA (in particular) find it politically expedient to avoid endorsing any “conflict model” of the relationship between religion and science. Even if they are not religious themselves, many scientists welcome the NOMA principle as a tolerable compromise.

Some accommodationists argue for one or another very weak thesis: for example, that this or that finding of science (or perhaps our scientific knowledge base as a whole) does not logically rule out the existence of God (or the truth of specific doctrines such as Jesus of Nazareth’s resurrection from the dead). For example, it is logically possible that current evolutionary theory and a traditional kind of monotheism are both true.

But even if we accept such abstract theses, where does it get us? After all, the following may both be true:

1. There is no strict logical inconsistency between the essentials of current evolutionary theory and the existence of a traditional sort of Creator-God.


2. Properly understood, current evolutionary theory nonetheless tends to make Christianity as a whole less plausible to a reasonable person.

If 1. and 2. are both true, it’s seriously misleading to talk about religion (specifically Christianity) and science as simply “compatible”, as if science – evolutionary theory in this example – has no rational tendency at all to produce religious doubt. In fact, the cumulative effect of modern science (not least, but not solely, evolutionary theory) has been to make religion far less plausible to well-informed people who employ reasonable standards of evidence.

For his part, Coyne makes clear that he is not talking about a strict logical inconsistency. Rather, incompatibility arises from the radically different methods used by science and religion to seek knowledge and assess truth claims. As a result, purported knowledge obtained from distinctively religious sources (holy books, church traditions, and so on) ends up being at odds with knowledge grounded in science.

Religious doctrines change, of course, as they are subjected over time to various pressures. Faith versus Fact includes a useful account of how they are often altered for reasons of mere expediency. One striking example is the decision by the Mormon church (as recently as the 1970s) to admit blacks into its priesthood. This was rationalised as a new revelation from God, which raises an obvious question as to why God didn’t know from the start (and convey to his worshippers at an early time) that racial discrimination in the priesthood was wrong.

It is, of course, true that a system of religious beliefs can be modified in response to scientific discoveries. In principle, therefore, any direct logical contradictions between a specified religion and the discoveries of science can be removed as they arise and are identified. As I’ve elaborated elsewhere (e.g., in Freedom of Religion and the Secular State (2012)), religions have seemingly endless resources to avoid outright falsification. In the extreme, almost all of a religion’s stories and doctrines could gradually be reinterpreted as metaphors, moral exhortations, resonant but non-literal cultural myths, and the like, leaving nothing to contradict any facts uncovered by science.

In practice, though, there are usually problems when a particular religion adjusts. Depending on the circumstances, a process of theological adjustment can meet with internal resistance, splintering and mutual anathemas. It can lead to disillusionment and bitterness among the faithful. The theological system as a whole may eventually come to look very different from its original form; it may lose its original integrity and much of what once made it attractive.

All forms of Christianity – Catholic, Protestant, and otherwise – have had to respond to these practical problems when confronted by science and modernity.

Coyne emphasizes, I think correctly, that the all-too-common refusal by religious thinkers to accept anything as undercutting their claims has a downside for believability. To a neutral outsider, or even to an insider who is susceptible to theological doubts, persistent tactics to avoid falsification will appear suspiciously ad hoc.

To an outsider, or to anyone with doubts, those tactics will suggest that religious thinkers are not engaged in an honest search for truth. Rather, they are preserving their favoured belief systems through dogmatism and contrivance.

How science subverted religion

In principle, as Coyne also points out, the important differences in methodology between religion and science might (in a sense) not have mattered. That is, it could have turned out that the methods of religion, or at least those of the true religion, gave the same results as science. Why didn’t they?

Let’s explore this further. The following few paragraphs are my analysis, drawing on earlier publications, but I believe they’re consistent with Coyne’s approach. (Compare also Susan Haack’s non-accommodationist analysis in her 2007 book, Defending Science – within Reason.)

At the dawn of modern science in Europe – back in the sixteenth and seventeenth centuries – religious worldviews prevailed without serious competition. In such an environment, it should have been expected that honest and rigorous investigation of the natural world would confirm claims that were already found in the holy scriptures and church traditions. If the true religion’s founders had genuinely received knowledge from superior beings such as God or angels, the true religion should have been, in a sense, ahead of science.

There might, accordingly, have been a process through history by which claims about the world made by the true religion (presumably some variety of Christianity) were successively confirmed. The process might, for example, have shown that our planet is only six thousand years old (give or take a little), as implied by the biblical genealogies. It might have identified a global extinction event – just a few thousand years ago – resulting from a worldwide cataclysmic flood. Science could, of course, have added many new details over time, but not anything inconsistent with pre-existing knowledge from religious sources.

Unfortunately for the credibility of religious doctrine, nothing like this turned out to be the case. Instead, as more and more evidence was obtained about the world’s actual structures and causal mechanisms, earlier explanations of the appearances were superseded. As science advances historically, it increasingly reveals religion as premature in its attempts at understanding the world around us.

As a consequence, religion’s claims to intellectual authority have become less and less rationally believable. Science has done much to disenchant the world – once seen as full of spiritual beings and powers – and to expose the pretensions of priests, prophets, religious traditions, and holy books. It has provided an alternative, if incomplete and provisional, image of the world, and has rendered much of religion anomalous or irrelevant.

By now, the balance of evidence has turned decisively against any explanatory role for beings such as gods, ghosts, angels, and demons, and in favour of an atheistic philosophical naturalism. Regardless of what other factors were involved, the consolidation and success of science played a crucial role in this. In short, science has shown a historical, psychological, and rational tendency to undermine religious faith.

Not only the sciences!

I need to add that the damage to religion’s authority has come not only from the sciences, narrowly construed, such as evolutionary biology. It has also come from work in what we usually regard as the humanities. Christianity and other theistic religions have especially been challenged by the efforts of historians, archaeologists, and academic biblical scholars.

Those efforts have cast doubt on the provenance and reliability of the holy books. They have implied that many key events in religious accounts of history never took place, and they’ve left much traditional theology in ruins. In the upshot, the sciences have undermined religion in recent centuries – but so have the humanities.

Coyne would not tend to express it that way, since he favours a concept of “science broadly construed”. He elaborates this as: “the same combination of doubt, reason, and empirical testing used by professional scientists.” On his approach, history (at least in its less speculative modes) and archaeology are among the branches of “science” that have refuted many traditional religious claims with empirical content.

But what is science? Like most contemporary scientists and philosophers, Coyne emphasizes that there is no single process that constitutes “the scientific method”. Hypothetico-deductive reasoning is, admittedly, very important to science. That is, scientists frequently make conjectures (or propose hypotheses) about unseen causal mechanisms, deduce what further observations could be expected if their hypotheses are true, then test to see what is actually observed. However, the process can be untidy. For example, much systematic observation may be needed before meaningful hypotheses can be developed. The precise nature and role of conjecture and testing will vary considerably among scientific fields.

Likewise, experiments are important to science, but not to all of its disciplines and sub-disciplines. Fortunately, experiments are not the only way to test hypotheses (for example, we can sometimes search for traces of past events). Quantification is also important… but not always.

However, Coyne says, a combination of reason, logic and observation will always be involved in scientific investigation. Importantly, some kind of testing, whether by experiment or observation, is important to filter out non-viable hypotheses.

If we take this sort of flexible and realistic approach to the nature of science, the line between the sciences and the humanities becomes blurred. Though they tend to be less mathematical and experimental, for example, and are more likely to involve mastery of languages and other human systems of meaning, the humanities can also be “scientific” in a broad way. (From another viewpoint, of course, the modern-day sciences, and to some extent the humanities, can be seen as branches from the tree of Greek philosophy.)

It follows that I don’t terribly mind Coyne’s expansive understanding of science. If the English language eventually evolves in the direction of employing his construal, nothing serious is lost. In that case, we might need some new terminology – “the cultural sciences” anyone? – but that seems fairly innocuous. We already talk about “the social sciences” and “political science”.

For now, I prefer to avoid confusion by saying that the sciences and humanities are continuous with each other, forming a unity of knowledge. With that terminological point under our belts, we can then state that both the sciences and the humanities have undermined religion during the modern era. I expect they’ll go on doing so.

A valuable contribution

In challenging the undeserved hegemony of religion/science accommodationism, Coyne has written a book that is notably erudite without being dauntingly technical. The style is clear, and the arguments should be understandable and persuasive to a general audience. The tone is rather moderate and thoughtful, though opponents will inevitably cast it as far more polemical and “strident” than it really is. This seems to be the fate of any popular book, no matter how mild-mannered, that is critical of religion.

Coyne displays a light touch, even while drawing on his deep involvement in scientific practice (not to mention a rather deep immersion in the history and detail of Christian theology). He writes, in fact, with such seeming simplicity that it can sometimes be a jolt to recognize that he’s making subtle philosophical, theological, and scientific points.

In that sense, Faith versus Fact testifies to a worthwhile literary ideal. If an author works at it hard enough, even difficult concepts and arguments can usually be made digestible. It won’t work out in every case, but this is one where it does. That’s all the more reason why Faith versus Fact merits a wide readership. It’s a valuable, accessible contribution to a vital debate.

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

Best-of lists

Jerry Coyne links to an article by Bim Adewunmi in The Guardian slagging off best-of lists – such as a recent list of greatest movies issued by the British Film Institute (it gave first place to Alfred Hitchcock’s Vertigo).

Like Jerry, I’ll paste in the objections made by Adewunmi, the reasons given for hating such lists:

• They remove originality of thought. Have you ever tried to compile a list of the best books of all time? Have you automatically written down any or all of these usual suspects – Dickens, Nabokov, Austen, or Woolf – without even realising? We’ve all done it. These authors and their many works are undoubtedly excellent, but is that the only reason they came to mind? No, they’ve been “normed” into your life. Who wants to be the lone wolf standing up in class and saying The Secret Dreamworld of a Shopaholic is their favourite book of all time when everyone else is nodding soberly along to Madame Bovary? Break free of the tyranny of lists! PS: the Shopaholic series is a delight.

• They kill joy. We’ve all used the clapping Orson Welles gif to punctuate Tumblr posts, sure, but have you ever watched all of Citizen Kane? All my life, I’ve been told it is the best thing my eyes will ever see. I have Citizen Kane fatigue. This is what lists do – when the hype gets too much, all joy is extracted from the endeavour. For example, I’m fairly obsessed with Buffy the Vampire Slayer. In previous years, I would wax lyrical about how amazing the show was, sitting people down and explaining – season by season – how layered and brilliantly conceived the show was, before pressing a box set into their hands, telling them: “Just watch it.” Inevitably, my overactive hype machine sucked all the joy from the situation. The simple pleasure of accidentally stumbling upon the magnificence was gone. The expectations are too high, the disappointment inescapable. These days, I’ve scaled back my enthusiasm. If people want to appreciate the wonder of a groundbreaking and perfectly pitched series that exquisitely explored the ideas of autonomy and feminism via a wisecracking teenager who battles supernatural beings, they will.

• They confirm your most depressing fear: you are desperately uncool. By definition, lists are exclusionary, separating the wheat from the perceived chaff. And while we all have views that might be considered a bit left field, we imagine those mark us out as cool mavericks, not social pariahs. But imagine the explicit confirmation that you’re wrong about everything – your favourite film, your most treasured book, your most beloved album. All wrong. Your very opinion: invalidated. No one wants that. The NHS couldn’t handle the strain of all the crushed egos.

Well, what do you think? I can’t take the last point very seriously, and the whole article seems to be a bit tongue-in-cheek. But is there something in any of these points?

I have to admit that I do often see movies or read books because I feel that … I should. But maybe that’s because I have some pretensions as a critic and feel the need to keep up in certain areas (and to be familiar with the acknowledged classics in those areas) partly out of fear of otherwise being a charlatan. This might not affect other people, people with fewer pretensions, so much. But even if you do feel some pressure to know the classics, I’m not sure that’s such a bad thing. There’s something to be said for having at least some cultural consensus, however shifting and contestable, as to what the classics are in various art forms – isn’t there? The bit about shifting and contestable is important but all the same…

On the other hand, if I were asked exactly what should be said for this, I admit that I’d flounder around somewhat. I’d find it difficult to come up with an answer that’s both compelling and concise.

Jerry Coyne makes a good point almost in passing: “While taste is subjective, the taste of people who are regularly exposed to film and books, and think about them, tends to run along concurrent lines, and so it’s worth paying attention to their suggestions.” That sounds about right to me. But it raises an important issue for philosophers – are our measures of the “greatest” or the “best” objective in any sense?

I have a long-term interest in whether novels or plays, say, can be interpreted or evaluated objectively. What do we mean even when we make a simple claim such as that Iago is the villain of Othello? That claim sounds like an objective truth – Iago really is the villain, right? But is it really? Should we say that Iago is coded as a villain if you read or watch it in accordance with certain conventions, but that it may be open (in some sense) to people to reject those conventions? Perhaps we can’t make any sense of it without applying at least some of the conventions that we use to construe the action of plays, but perhaps it’s possible to throw out enough to interpret Othello against the usual grain, with Iago as the hero. Yes? No?

Even if that’s not possible, what if we start interpreting the play at a more abstract level – e.g. as a cautionary tale about jealousy? Don’t we need to rely on conventions that are more contestable? And if we are going to evaluate the play, won’t our evaluations depend on our interpretations, as well as on further criteria of evaluation that may not be binding on others?

What I want to say here is something along the lines that there are always institutional and subjective elements in the interpretation and evaluation of artistic works, and yet interpretation and evaluation are not merely arbitrary. There are going to be reasons why certain conventions and standards are more relevant than others, and why skilled critics, or at least those from similar backgrounds or with similar interests, are likely to converge to a great extent on the same interpretations and evaluations. If that’s right, a list produced by people who are generally regarded as competent critics in a particular field will probably contain works that will be valued by anyone who has internalised much the same conventions of interpretation and standards of evaluation. There is, however, always scope to challenge them, at least at the margins … though then again, perhaps you’re best able to do this if you actually understand them.

So, what do you think of such lists? Is there anything about them that’s objective? Are they just arbitrary? Do you, personally, find them of any value, or would you rather they all be cast into the sea?

[Pssst: My Amazon page]

Jerry Coyne writes back – about free will

Over at Why Evolution Is True, Jerry Coyne recently wrote a post responding to my earlier post on his piece in The Chronicle of Higher Education. This debate can go back and forth a lot, but let me clarify a few things at least.

I’ll start by pointing out that nothing I have said in this series of posts is meant to deny that there could be threats to the idea of free will. Although I’ve stated that I have compatibilist leanings, that does not mean that I’ve outright defended compatibilism (the idea that free will is compatible with determinism), let alone that I’ve defended the claim that we have free will, let alone that I’ve defended actually using free will talk.

The points I’ve been making have been a bit more subtle than that. I’ve mainly been pointing out difficulties in certain arguments against compatibilism, though occasionally I’ve pointed out problems with certain arguments for compatibilism, and I’ve even pointed out some problems with the claim, “You have free will” – problems that would exist even if determinism is not true.

As to the latter, even if determinism is not true, there are well-known arguments as to why a mere mix of occasional indeterminism with determinism is unlikely to give us free will if we otherwise lack it (Jerry alludes to this in the Chronicle, and I agree with his brief comment on it). Moreover, even if determinism is not strictly true, it is difficult to see how I could be responsible for my own character, desires, etc., all the way down. Coming up with a picture of how this could work that is both coherent and plausible seems very difficult. But if we are not responsible for our characters, desires, etc., all the way down, that might start to run afoul of notions of moral responsibility, depending on what our intuitions are about that. And if we question moral responsibility that might lead to our rejecting the idea of free will (this assumes a widely-argued claim that an act performed with free will must be one for which I am morally responsible). Note that I am not pressing this argument, and I’m not convinced by it. I mention it only to give an example of reasoning that I simply have not dealt with (at least in any concerted manner) in these posts.

Again, what if, perhaps based on findings from Freudian psychoanalysis, or perhaps simply based on experiments in social psychology, we come to think that our psyches are sufficiently riven and/or mysterious to us that it no longer makes much sense to talk about such things as our characters or our desires? Even such words as “we”, “I”, “us”, “our”, etc., might come to seem problematic. If the world is sufficiently like that, perhaps we (!) should abandon free will talk even in the most everyday sense. I tend to think that psychoanalysis is mainly bunk, but there’s much material in the social psychology literature that could give us pause. Furthermore, none of this concern requires that strict causal determinism operates.

So, I have not demonstrated that we have free will, or even attempted to do so. Perhaps, for all I’ve argued, we don’t have it even if determinism is false. Nor have I demonstrated that compatibilism is true, merely that some of the arguments against it are not especially compelling and even seem to contain fallacies of reasoning.

Another point that should be made to try to get all this a bit clearer is that I am not especially reluctant to concede that causal determinism is true to whatever extent is required for arguments based on it to go through (assuming the arguments have no other problems). So Jerry misreads me when he thinks that I accept determinism “only grudgingly”. On the contrary, it would make the whole debate simpler for me if we knew that determinism is true. I’m not temperamentally opposed to determinism. Furthermore, I think that it’s probably true enough for our purposes. However, I wanted to be careful to bracket off certain questions so that I am not arguing with people who say, “Determinism is not true in any event!” Recall that the six pieces I was discussing pretty much assumed determinism, so I was doing likewise. Being careful to state that I am assuming determinism, even though I am not claiming to be able to prove it, certainly in the posts concerned, is not being grudging. It’s just a matter of trying to limit the range of the arguments.

Finally, at this point, I don’t necessarily think the “could have acted (or perhaps chosen) otherwise” or “your choice could have been different” sort of definition of free will is a good one. Some philosophers argue that we have free will even in some situations where we can’t act otherwise.

However, I am prepared to accept something like this definition for the sake of argument, with the proviso that I think it becomes implausible if some unusual or technical definition is given to the word “can” and its cognates such as “can’t” and “could”. If we use these words in ordinary ways, perhaps they do bring out something in what is arguably one folk conception of free will (I won’t say the folk conception, because one theme of these posts is that the folk may not all have the same conceptions and intuitions, and that may even be a reason to use different terminology).

Having said that, however, compatibilism (the claim that determinism and the existence of free will are not contradictory) and compatibilist free will (the idea that we actually have free will of a kind that is compatible with determinism) still seem to be in reasonably good shape. Or at least they don’t seem to be in too much danger from the points made in the articles that I was discussing, i.e. the articles in the Chronicle.

In his new post, Jerry runs some of the arguments together and deals with many side issues. I can’t mop up all of them without this post becoming (more) inordinately long, so my silence on some points doesn’t signal assent. To be fair to him, he wants to deal with various matters that he raised in his Chronicle piece, whereas my own post was focused pretty much on the first paragraph of it.

One issue in the new post is that he seems to have an intuition that we can’t rightly blame someone for heinous conduct such as failing to save a drowning child (when doing so would have been easy, etc.), based on the thought that the person who failed to save the child was not ultimately responsible for his/her own character, set of desires, etc.

Perhaps this intuition is right (although I doubt it) – and I didn’t attempt to deal with this argument in the earlier post. I did, however, use the scenario of the drowning child to demonstrate how we ordinarily use such words as “can” (“can’t”, “could”, etc.). Let’s return to that.

Perhaps Jerry wants to use the word “can” in a special sense, but if so the word becomes equivocal in its meaning. Normally, when we say, “I can save the child” or “I could have saved the child” we mean something slightly (but not very) vague to the effect that I have whatever cognitive and physical capacities are needed, have whatever equipment is required, am on the spot, and so on. Perhaps it includes not being in the grip of a disabling phobia and not being coerced by someone with a gun. “Can” refers to a commonsensical notion – slightly vague, but no more so than most ordinary language – of having the ability to do something.

If all this applies, but I fail to save the child (perhaps because I dislike children or because I don’t want to get wet, or because I am just too lazy), it still makes sense to say that (speaking tenselessly) I can save the child but I don’t do so because I don’t want to. Here, the ordinary meaning of “can” is being applied correctly to the situation. If Jerry’s argument demands throwing out this ordinary usage, it’s in all sorts of trouble. If he wants to use “can” and “could” in some other sense, apart from the ordinary one, in the context of free will talk, I see no reason to believe that his conception of free will is much like what the folk have in mind when they say, for example, “Russell acted of his own free will.” The empirical research done to date, e.g. by Eddy Nahmias and his colleagues, does not suggest that the folk, or the majority of them, have some special meaning of “can”, “could”, and “ability to act” in their minds.

Jerry says:

This statement leaves me completely baffled. When Russell says “I could, indeed, have chosen to do otherwise,” he seems to mean only, “had I been somebody other than Russell Blackford at that moment, I might have done otherwise.” And in what sense is that free will? It’s one thing for people to chastise somebody for making a “bad choice” (an emotion that feels natural but is at bottom irrational), but it’s a different thing to think that somebody actually can act in different ways at a single time.

But as I’ve said, if the person can (in the ordinary sense of “can”) save the child the first time round, the person can (in the same sense) also save the child the second time round. Jerry says in the original post:

To put it more technically, if you could rerun the tape of your life up to the moment you make a choice, with every aspect of the universe configured identically, free will means that your choice could have been different.

This is a bit confusing partly because of the tenses that Jerry uses. But think of it like this. If determinism is true and the tape is rerun, then I will act in exactly the same way whether I have free will or not. After all, why wouldn’t I? If the tape is rerun exactly, then I will have exactly the same abilities and exactly the same motivations, so why expect me to act differently, even if I have free will? This is just puzzling. Indeed, if I act differently on the replay of the tape, even though my abilities and motivations are exactly the same, that looks, if anything, as if we live in a world in which mysterious, spooky forces interfere with our lives – i.e. a world in which we don’t have free will!

If the way I acted the first time turned on my motivations (e.g. I don’t like children), then the way I act the second time will also turn on my (identical) motivations. Likewise when the tape is run the nth time, where “n” is some arbitrarily large number. If (speaking tenselessly) I can save the child the first time, then I can save the child in exactly the same way and in exactly the same sense the nth time. However I won’t do so. My failure to do so flows from my motivations, not from my abilities (or from the interference of something spooky such as the stars, the gods, or Fate).

Perhaps we don’t have free will. Although there are no spooky forces controlling us, someone might argue that, for example, we all have deeply discordant sub-conscious urges which play much the same role. As I mentioned above, there may be many worries about free will, and I haven’t tried to deal with them all. But none of this stuff about replaying tapes, and what would happen if we did so, is helpful to hard determinists like my friend Professor Coyne.

Jerry Coyne on free will

As I indicated yesterday, I am going to comment on the six pieces about free will published recently in The Chronicle of Higher Education. I’ll start with Jerry A. Coyne’s article entitled “You Don’t Have Free Will”.

Some preliminaries

This article contains points that I agree with (for example, that the expression “free will” is used in many ways or with many meanings) and points that I possibly agree with (for example, that we should drop free will talk). I do think it’s clear that many different definitions of “free will” are used, and I’m inclined to think that that, alone, might be a reason not to use the expression. It can mean that we are all just debating at cross-purposes.

At the same time, I wonder what the expression conveys to an ordinary person in ordinary discussion. Attempts to get that clear by the sort of conceptual analysis favoured by analytic philosophers don’t appear to me to have gone anywhere near settling this, and there doesn’t seem to be a lot of empirical research on the subject. To an extent, I am relying on a hunch here, but the difficulty that philosophers have had, historically, in defining “free will” makes me wonder whether the meaning of the expression is clear at all, unless a meaning is actually stipulated for the purpose of debate. In that case, we might frequently be talking past each other when we use the term.

I also suspect that the term has various connotations that are troubling. It may be that when I say, “You have free will”, I at least connote something rather spooky that is likely to be false. At the same time, if I say, “You do not have free will”, I may at least connote certain fatalistic or passivist ideas that are also likely to be false. So perhaps, if I want to avoid misleading people, I should avoid saying either of those things. (But I’d like to see some more empirical research on what these statements, “You have free will” and “You do not have free will”, actually do connote to people.)

So I can agree with Professor Coyne that we might do best to avoid the term “free will” … and try, instead, to make whatever points need to be made with other language. At the same time, my reasons are, I think, a bit different from his.

Are compatibilists just saving face?

I do not agree with him when he says the following:

Although science strongly suggests that free will of the sort I defined doesn’t exist, this view is unpopular because it contradicts our powerful feeling that we make real choices. In response, some philosophers — most of them determinists who agree with me that our decisions are preordained — have redefined free will in ways that allow us to have it. I see most of these definitions as face-saving devices designed to prop up our feeling of autonomy.

I don’t think there is any reason at all to believe that; it strikes me as overly cynical. I can report, in my own case, that my past (and certainly not entirely buried) tendency towards compatibilism is not at all a face-saving device of this kind. It is a sincerely held position based on the view that we retain certain capacities even if our decisions are the product of a causally more-or-less deterministic process. Furthermore, reflection on what, of the things that matter, reasonably falls within the ambit of the free will debate leads me to think that the capacities we retain are very important.

These capacities include: the ability to deliberate; the ability, more specifically, to deliberate about what I most value or desire in a situation; the ability to shape my own future to an extent, as a result of my choices; and, more generally, the ability to affect the future of my society and my world, to an extent, as a result of my choices. Some people – certain fatalists and passivists – seem to deny the latter abilities, at least.

Consider “soft determinism”, which is perhaps best regarded as a sub-set of compatibilism (if compatibilism is regarded as something like the view that free will and determinism are logically compatible whether or not determinism is actually true). Soft determinism might be interpreted as the claim that these fatalists and passivists are wrong, even though causal determinism is more-or-less correct. If that’s a plausible interpretation of what soft determinists are trying to say, then soft determinism seems like a position that is at least arguable and that people could hold sincerely. Once again, I see no reason to believe that people who hold these sorts of positions are insincere or trying to change the subject. So I reject this talk of “face-saving” and so on.

The “couldn’t have acted/chosen otherwise” argument

Still, is the Coyne position correct to this extent: We don’t have free will in the sense defined by the article?

The first problem is that the article relies on the claim that we live in a more-or-less deterministic world, including at the level of the brain. Things could get a bit complicated if it turns out that the brain functions in an indeterministic way (to some important degree), and I’m not at all sure that the actual science accomplished to date rules this out. However, the science may be suggestive, and in any event I’m not opposed to the claim, either temperamentally or philosophically; it seems plausible enough to me, even if not definitively established. For the sake of argument, then, let’s assume that the brain (along with everything else) functions deterministically to whatever extent is needed for Professor Coyne’s argument to go through.

Does this rule out free will? Well, that’s going to depend on our definition of free will, and I’ve argued that this is unclear and that different definitions may be used sincerely and reasonably. Still, what if we use the idea of:

At the moment when you have to decide among alternatives, you have free will if you could have chosen otherwise. To put it more technically, if you could rerun the tape of your life up to the moment you make a choice, with every aspect of the universe configured identically, free will means that your choice could have been different.

Even this is problematic. The idea of “could have chosen otherwise” (which some philosophers do, indeed, use as a definition of free will) is at best equivocal.

On one interpretation, to say that I could have chosen otherwise simply means that I would have been able to act differently if I’d wanted to. Say a child drowns in a pond in my close vicinity, and I stand by allowing this to happen. The child is now dead, and the child’s parents blame me for the horrible outcome. Will it cut any ice if I reply, “I couldn’t have acted (or couldn’t have chosen) otherwise?” No. They are likely to be unimpressed.

What more would I have needed to have been able to act otherwise? I was at the right place at the right time. I can swim. No special equipment that I lacked was actually needed … and so on. The parents are likely to reply that it’s not that I couldn’t have chosen to act otherwise, but that I merely didn’t want to act otherwise.

Surely there are many cases like this where the reason that I didn’t act otherwise was not any lack of capacity, equipment, being on the spot, etc., but merely that I didn’t want to act otherwise. The most salient thing determining how I acted was my desire-set. Leave everything else in place, but change my desire-set, and I would have acted otherwise. In those circumstances, it is true that I could have acted otherwise. In those circumstances, someone can rightly say to me: “It’s not that you couldn’t have acted otherwise; it’s that you didn’t want to.”

Suppose the tape is replayed. Suppose that determinism is sufficiently true that I end up making exactly the same decision for exactly the same reasons (I don’t want to get wet, I don’t like children and desire that as many as possible drown, or whatever my reasons might be). If determinism holds true to that extent (which, again, I am happy to stipulate), I’ll act in exactly the same way – speaking tenselessly, I don’t save the child. Professor Coyne says, and we’ll stipulate that he’s right: “free will means that your choice could have been different.”

But, Jerry, it could have been! It’s true that if the tape is replayed my choice will be the same. Putting it another way, it’s true that my choice wouldn’t be different if the tape were replayed. But the article is confusing wouldn’t with couldn’t. It’s a straightforward confusion of modality. As happened the first time, I could save the child in the perfectly familiar sense that I have whatever capacities, equipment, proximity, etc., are required. As happened the first time, the parents could and would rightly say to me, “It’s not that you couldn’t have; it’s that you didn’t want to.”

Now it’s true that my wants or values or goals, or whatever – my desire-set – may itself be determined causally. Indeed, I’m assuming throughout that this is so. I’m assuming (and I think this is reasonable, given the concessions I’ve made to causal determinism) that all these things are identical with states of my neurology that have a physical causal history. Perhaps that fact grounds some kind of argument against free will, if we imagine that free will involves some sort of ultimate capacity for self-creation. I agree that we don’t have free will – certainly on this picture – if “free will” means: “Free will all the way down.” Thus, on this picture, we don’t have free will of a kind that could be deployed in theodical arguments … my choices can be traced back eventually to the initial creative acts of God, if such exists.

But as long as the explanation as to why I didn’t act otherwise is just those states of my neurology – the ones that constitute my desire-set – the parents are quite right to complain that I could have chosen to do otherwise and saved their child. “You just didn’t want to,” they say, correctly. I was someone whose desire-set was such that I wouldn’t act otherwise in such circumstances, but I was not someone who couldn’t do so. Thus the “couldn’t act otherwise” argument, based on causal determinism, should not convince us that we lack free will. When I failed to save the child, I could, indeed, have chosen to do otherwise.

Jerry Coyne and Sam Harris on free will

Jerry Coyne has an interesting article on free will in USA Today and a follow-up post at Why Evolution Is True. It all seems to be triggered by the publication of a new book about free will by Sam Harris.

Both Harris and Coyne point-blank deny that free will exists. The USA Today piece is well worth reading, but the passage quoted from the Sam Harris book, in the post at WEIT, doesn’t impress me. In particular, I disagree about the “changing the subject” claim as a way of dismissing compatibilism. Come on, Dr Harris, that is a rhetorical tactic to put down thoughtful and intellectually honest opponents, rather than an attempt to appreciate the real strength of what they are saying. (Of course, you may not have any choice as to whether or not you argue like this.)

It’s just not at all clear that the original “subject” was some spooky power to act contrary to our own desires and whatever physical substrate they supervene on. In fact, I doubt that any serious philosopher thinks of it quite like that, and I doubt that ordinary people do either – though the attempts by some libertarians (in the sense relevant to this debate) to preserve our motivations while giving us a radical power to act independently of the causes that shaped our personalities are, indeed, sometimes baffling. It seems that they want to have it both ways, which places them in danger of saying something incoherent. As for what ordinary people think about all this … well, it’s likely to be very confused.

I agree that there is no free will in any spooky libertarian sense. I don’t think the idea can be rendered coherent, whether physical determinism (at the level of the brain’s functioning, say) is true or not. But this is all a very modern way of thinking about it. It may be what’s bugging some people, but historically the questions were more along the lines of: “Am I a plaything of fate or destiny or necessity or mere chance or the will of the gods?” “Is it rational to deliberate about what I do, if the outcome is fated anyway (or, conversely, a matter of mere luck)?” “Are my attempts to shape my own life and to make a difference to the world all futile?” These are the questions that are at stake in the traditions of myth, literature, and even, to a large extent, philosophy.

Even now, much popular fiction involves themes of, “Can I overcome my destiny?” “Can I forge a better life for myself?” This kind of thing, arguably, is what gnaws at ordinary people outside of any formal theological or philosophical context. Do we have the power (or some power, at least) to shape our lives? It seems obvious that we do, or why bother making decisions at all (unless we simply can’t help it, right?). Why deliberate about what career to pursue, if it’s all controlled by God or the stars, anyway? But we do, ordinarily, think it’s worthwhile deliberating about what career to pursue, what skills to develop, etc. Deliberating certainly doesn’t seem irrational or futile.

This obvious appearance is challenged by various plausible-looking arguments, ranging from arguments about the foreknowledge of God, to arguments about physical determinism, to arguments about living in an Einsteinian block universe, to arguments based on the law of excluded middle (after all, all statements about the future are either true or false … aren’t they?). And doubtless many others. These arguments suggest that our sense of having some ability to shape our own lives is an illusion. That is exactly what Harris and Coyne think it is.

Well, perhaps one of those arguments works, but if you’re going to show why they probably don’t, and why the everyday appearance that we can make decisions, act on the world, and, to some extent, shape our own future lives, is not just an illusion after all … well, of course you’re going to have to do what philosophers do: make distinctions, try to clarify issues, and so on. That isn’t arguing in a contrived or dishonest way, or “changing the subject”. It’s our job. It’s how we earn our supper.

When philosophers try to clarify, and perhaps dissolve, these concerns, showing, perhaps, that the concerns don’t make good sense on closer analysis, we are playing a time-honoured role. Indeed, the Stoics (or certain of them) gave a “compatibilist” answer to the question of whether outcomes can be up to us, in some sense – despite there also being some truth about what we will decide – way back in Hellenistic and Roman times.

The issue of free will in the specific sense that I mentioned in the third paragraph above becomes important in debates about whether God could be absolved of responsibility for evil actions by us. If some sort of spooky free will exists, it’s thought by some theologians and philosophers that this creates a gap between the creative activity of God and the evils perpetrated by us, thus solving the ancient problem of evil.

Others may try to argue for spooky free will in an effort to preserve moral responsibility. They think that we can’t be (morally) responsible for our actions unless we are somehow responsible for them all the way down. Thus, they want to create a gap, not between our actions and God but between our actions and whatever events formed us as we are. Indeed, this issue has become central in the contemporary debate about free will among professional philosophers. It’s now largely a debate about whether and when we are responsible for our own actions.

But once again, compatibilists who are involved in this debate are not engaging in any dishonest or contrived reasoning. It is strongly arguable that no spooky gap between us and the events that formed us is required for us to, quite rationally, hold each other responsible for our choices and actions. You may disagree with this, but it’s inevitable that a question like that is going to require both sides to engage in attempts at conceptual clarification. This is not “changing the subject”.

Jerry Coyne rightly points to these – i.e. theodical reasoning and arguments about moral responsibility – as two areas of discourse where a spooky gap is invoked. Since he evidently thinks that spooky gaps are needed for moral responsibility, he denies, if I read him correctly, that we have moral responsibility.

Let’s set aside the theodical arguments. I agree that the free will defence is unpromising as a solution to the problem of evil. But what about (moral) responsibility? Surely getting all this clear requires that we examine what the concept really amounts to – and that is a non-trivial exercise in conceptual analysis, since the concept of (moral) responsibility, as it appears in everyday discussion, does not look straightforward, or even coherent, and it is tied up with many other difficult concepts, such as concepts of fairness, justice, and desert. There’s conceptual work to do here, and the best approach is simply not obvious.

Can Atheism Be Proven Wrong?

A friendly debate has come up between the atheists Jerry Coyne and PZ Myers. The question under debate is, “Can atheism be proven wrong?” On the one hand, Jerry Coyne has argued that his atheism is, and should be, capable of being defeated by evidence. On the other hand, PZ Myers has argued that religious claims are incoherent, and so it’s pointless trying to refute them in that way. Even if seemingly divine events did happen, we could explain them as hallucinations, or as the intervention of aliens — there’s no need to talk about God.

On behalf of Team Coyne, Greta Christina has argued that Myers is right to say that religious claims are bullshit, but that Coyne is right to insist that atheism can be defeated by evidence. However, on behalf of Team Myers, Diaphanitas has argued that Christina has missed the point: if you think that religious claims are incoherent, then you can’t think that they can be defeated by the evidence. In order for a claim to be capable of being defeated by evidence, it has to be a coherent claim in the first place. (Edit: Or, at least, that’s the cliff’s notes version. I’m going to be a naughty blogger by not giving more of a summary than that. If you’re interested in the full conversation, click the links above.)

I’ll argue that Christina is right, hoping to score points for Team Coyne, and hopefully be the hero to capture Team Myers’s filthy squid-adorned flag. Specifically, I’ll be arguing against some of Diaphanitas’s core claims. (I’ll avoid the stuff about NOMA, because I want to avoid complaints of tl;dr.) In other words, some interpretations of atheism and theism can both be shown to be wrong according to the evidence, and that’s the only point worth making.


The sticking point between Christina and Diaphanitas is what I’ll call “the semantic principle of bullshit”. Since religious claims on the whole do not hold themselves to common standards of evidence, we have to say that religious sentences are epistemically unstable. Hence, they’re not the sorts of things that can or should be evaluated in terms of evidence.

And it seems to me that, as a matter of fact, the principle of bullshit is correct — religious sentences, when taken on the whole, don’t know whether they’re coming or going. (It doesn’t matter to my argument if you don’t agree; you can just assume it for the sake of seeing my point.) Since atheism is the rejection of theism, endorsements of atheism have an equally small burden. As Hitchens says: “What can be asserted without evidence, can be rejected without evidence.”

Unlike Diaphanitas, I don’t think the principle of bullshit makes any difference to Christina’s point. For bullshit claims can be plausibly interpreted in a literal way, if our aim is to understand the intentions and beliefs of some mainstream religious persons. It seems to me that the only way to defeat a bullshit claim is for us to round up all of the most plausible interpretations of the claim, and then show how each interpretation is false. Hence, you have to refute every plausible use of the sentence: by treating it as a God Hypothesis, and then as an allegory, and then as an expression of self-assertion, and so on.

So that will mean that eventually atheists will have to get around to showing that the best explanation of the evidence does not include reference to any Gods, and hence theistic claims are improbable. In other words: atheists will have to make the argument that Richard Dawkins makes in the first half of The God Delusion (or something like it). And to the extent that you’re arguing in terms of facts, you must also think of yourself as open to criticism on the basis of the evidence. As far as I can tell, this doesn’t mean that atheists like Coyne and Christina are “obsessed with the evidence”. It means that they insist that the examination of the evidence is essential when you’re in the business of interpreting sober, factual claims. If that’s an obsession, it’s a healthy one, as Diaphanitas admits.


So where’s the beef? Evidently, it has something to do with paradigms.

Diaphanitas thinks that evidence plays a limited role in the history of science (and hence, presumably, an even more limited role in the history of atheism and religion). For Diaphanitas, Thomas Kuhn‘s historiography of science is the best way of understanding the relationship between evidence and scientific change.

The spectre of Thomas Kuhn rises often, but it really needs to behave itself when it does. For while it’s true that Kuhn thought that a change in worldview involved a kind of “conversion” or “theory choice”, it’s also true that Kuhn argued that “objectivity ought to be analyzable in terms of criteria like accuracy and consistency”. On my reading of Kuhn, these virtues were necessary for scientific practice, though not sufficient. If this means Kuhn was “begging the doxastic question”, then let’s also blame him for getting us to care so much about accuracy.

Diaphanitas, like Kuhn, wants to say that we’re doing more than just consulting the evidence — we’re making a choice, too. That’s fine — but it’s also a very weak claim, and it is consistent with the idea that evidence has to play a central role in scientific inquiry (and factual discourse). To my knowledge, there is nothing in Kuhn that helps us to say that religious claims in the 21st century world are plausible candidate explanations of the evidence. (As survivors of the Great Lisbon Earthquake could tell us, the Argument from Design is simply not consistent with the evidence.) And when you argue in favor of the Abrahamic God using the Argument from Design, you are committing yourself to a kind of game that involves checking the facts — those are the rules that the proponents of the Watchmaker God are committed to. In that sense, contrary to Diaphanitas’s claim, the naturalist and the Watchmaker God are “in the same playing field”. They’re both responsive to the evidence.


Still, Myers and Diaphanitas are correct in the following sense. If the principle of bullshit is right, then it is wrong to think that religious claims must be read as expressions of a kind of unique content. So, any theists who say “The Bible is just an allegory” are wrong, and any who say “The Bible must be taken literally” are wrong too. It’s both, and more besides. The argumentative atheist has to use the shotgun method, taking aim at one interpretation after the other.

The moral of the story is this. Just because religious claims are unstable doesn’t mean that the uses of the claims have to be up in the air. One use of religious claims involves the Argument from Design; and the Argument from Design is perfectly coherent, perfectly stable, and perfectly worthless. Hence, any atheism concerned with the Abrahamic Watchmaker God is supported on the basis of the evidence. If the evidence turned the other way — e.g., if a credible argument could be made that the problem of evil was just a pseudo-problem — then the only responsible option for a Watchmaker critic would be to reconsider their atheism.

*Edited for clarity.
