
The Atheist’s Guide to Reality: An Interview with Alex Rosenberg

[Originally published February 2012]

Reality, notes philosopher Alex Rosenberg, is “completely different from what most people think… stranger than even many atheists recognize.” And having spent some 40 years trying to work out “exactly how advances in biology, neuroscience and evolutionary anthropology fit together with what physical science has long told us”, Professor Rosenberg seems well placed to judge. Thinking seriously and unsentimentally about the nature of reality and life’s ‘persistent questions’ has led the R. Taylor Cole Professor of Philosophy at Duke University to some striking, disconcerting and far-reaching conclusions. In The Atheist’s Guide to Reality: Enjoying Life Without Illusions, Rosenberg aims to spell out just what the atheist’s attachment to science really commits him to.

The author of some 14 books and an eminent philosopher of science, Professor Rosenberg has been kind enough to answer some questions from Talking Philosophy about his controversial and challenging work. The questions posed, and Professor Rosenberg’s replies to them, have been posted in full ‘as is’. Readers will, I hope, find something in the following to stimulate both thought and discussion.

Your book is aimed squarely at atheists, but it’s not a book about atheism as such; rather, it’s a book about what atheists should believe. What are the most important things that the atheist needs to know about reality? And can he really enjoy life without illusions?

The most important thing to know about reality is that science understands it well enough to rule out god, and almost everything else that provides wiggle room for theism and mystery mongering. That includes all kinds of purposes, including even ones that conscious introspection suggests we ourselves have. Conscious introspection was shaped by natural selection into tricking us about the nature of reality. We need always to be on our scientific guard against its meretricious temptations. Treating the illusions that rise to consciousness as symptoms, instead of guides to meaning and value, is crucial to enjoying life. It’s not easy, but taking science seriously is the first step, despite the difficulty consciousness puts in the way of understanding it.


You note early on that “the effort to argue most people out of religious belief was doomed by the very Darwinian forces that the most fervent of Christians deny.”  Does evolution select for superstition and conspiracy theories? And how can they be dispelled?

Getting us from the bottom of the food chain on the African savannah to the top required mother nature (a.k.a. natural selection) to solve several design problems. Its quick and dirty solutions included ones that exaggerated our tendency to see conspiracies—plots in which there is a motive behind every event in nature. That’s what made religious belief unavoidable. It’s why religion is almost universal. Can these false beliefs be dispelled? Probably not completely, and probably not at all for people who have trouble understanding science.

Are introspection and common sense the greatest obstacles to understanding and accepting reality?

Introspection? Yes. Common sense, no. For reasons just mentioned, we were shaped to be suckers for a good story, a narrative with a plot driven by motives—people’s, god’s, nature’s. By making us think that our own behaviour is directly understandable to us as the product of our (usually conscious) will, introspection effectively prevents us from discovering its true sources in non-conscious brain processes. Add to that the fact that scientific theories of human behaviour (and everything else) are much harder to understand just because they don’t involve narratives and plots, and the obstacles to understanding erected by conscious thought become obvious.

Common sense is another matter, however. Science is just the result of 400 years of common sense recursively reconstructing itself, weeding out false hypotheses and introducing better ones. The result of course is quantum mechanics, Darwinian theory, neuroscience—common sense reshaped into something that most people can’t understand because they don’t have the patience and mathematical ability to work their way through the details.

What is your conception of ‘scientism’ and why have you ‘reclaimed’ the term?

My conception of scientism is almost the same as that of those who use it as a term of abuse. They use the term to name the exaggerated and unwarranted confidence that science and its methods can answer all meaningful questions. I agree with that definition except for the ‘exaggerated’ and ‘unwarranted’ part.


You seem strongly committed to a form of physicalist reductionism – not eliminativism – perhaps you could say a little more about that and some of the misconceptions surrounding it?

To use some philosophical jargon, I am an eliminativist about the propositional attitudes. That is, I believe that the brain acquires, stores, and uses information, but that it does not do so in the form of sentences, statements or propositions. The illusion that it does so is another one of those mistakes foisted on us by conscious awareness. The eliminativist thesis I just expressed will sound abstract and inconsequential to many people, and completely incoherent to many philosophers. In The Atheist’s Guide to Reality I explain why it’s true and what its huge upshot for theism and mystery mongering is. But I don’t deal with the philosophers’ charge that the denial that we think in statements about the world is incoherent. That’s a task for an academic paper. Suffice it to say that neuroscience forces us to be eliminativist about some things consciousness foists on us, but it does not deny the reality of sensations, emotions or, for that matter, cognition—properly understood. It’s scientism that mandates the reductive explanation of all three, an explanation that neuroscience is well on its way to providing.


You are strongly committed to the view that “the methods of science are the only reliable way to secure knowledge of anything”. What would you say to those who would suggest that the methods of science can give us no knowledge about mathematics and what it is like to see red?

What I say in response to such sophisticated philosophical challenges is, first, that like all the other metaphysical and epistemological alternatives, scientism does not yet have a satisfactory account of mathematics or our understanding of it; second, that the so-called “hard problem” of consciousness—what it’s like to have a qualitative experience—is a signpost along the research program of neuroscience. It will eventually have to dissolve this problem, just as physics eventually had to dissolve Zeno’s paradox of motion. Meanwhile, if I have to weigh the achievements of science in the balance against the problems of the philosophy of mathematics and the first-person point of view, I’ll choose science. 400 years of ever-increasing depth and breadth in explanation and prediction carries a lot more weight with me than a handful of philosophical conundrums and Platonism about mathematics.


You assert that “science’s description of the world is correct in its fundamentals; and that when ‘complete’ what science tells us will not be surprisingly different from what it tells us today.” Perhaps you could say something about those fundamentals, why you think they are unassailable and how much can be derived from them?

I argue in The Atheist’s Guide that all the science we need to answer the “persistent questions” that keep most thoughtful people up at night is physics’ rejection of final causes, entelechies and prior designs in nature, along with the 2nd law of thermodynamics. Those two are enough to give us natural selection, and together they are enough to solve all the other problems most people have about reality, the meaning of life, the nature of the mind, free will, ethics and the trajectory of human history.

But these established parts of science are of course not enough to answer all the scientific questions about these matters. To answer the questions of science (quite different from the limited questions of philosophy that people commonly ask themselves and their religious “advisers”) requires all the rest of science, including the parts that are still subject to development, change, revision, and even in a few cases, revolution. But nothing at the frontiers of any science is going to overturn the 2nd law of thermodynamics, natural selection or the basic molecular biology of the neuron.

Is the fallibility of science a weakness in your argument or one of its strengths?

Science is common sense recursively reconstructing itself. The reconstruction reflects the fallibility of common sense. Insistence by science on the tentativeness of its results at its ever-shifting research frontier is what gives us confidence that, after repeated tests, the parts most distant from that frontier are unlikely to be called into question.

The recurring dictum of your book is that ‘the physical facts fix all the facts’. What do you mean by that, and how hard is it to persuade people of it?

Nothing more than this: take a time slice of any chunk of the universe—say, our planet, or solar system, or galaxy. Now produce a perfect—fermion for fermion, boson for boson—physical duplicate of that chunk at that moment. Then, everything that is true about what is going on in that first chunk, including all of the biological, psychological, sociological, political, economic, and cultural facts about it, will be true at the second, duplicate chunk.

I don’t know how hard it is to persuade people of this. It’s probably impossible to persuade many people once they realize it deprives their worlds of physically irreducible features.

Many of your readers may be amenable, in principle, to your contention that there is “no chance” of free will. But few, it seems, can fully come to terms with the fact. Is free will an illusion that is here to stay? Do you think that accepting that it is an illusion could change our behaviour, and would you want it to?

Realizing there is no free will is unlikely to change our day-to-day behaviour, especially not our penchant for blaming people, and praising dogs for that matter. But it could change our politics a bit. In The Atheist’s Guide I argued that the core morality mother nature imposed on us together with the denial of free will is bound to make the consistent thinker sympathetic to a left-wing, egalitarian agenda about the treatment of criminals and of billionaires.


You assert that “scientism dictates a thoroughly Darwinian understanding of humans and of our evolution—biological and cultural” and that this means that “when it comes to ethics, morality, and value, we have to embrace an unpopular position that will strike many people as immoral as well as impious.” Just how bad is the news about morality? And why do you think “new atheists” like Sam Harris and Daniel Dennett can’t accept it?  

Second question first. Nihilism—even my “nice nihilism”—is a public relations nightmare. Most of my fellow travellers think that if the scientific worldview saps morality of its truth, correctness and justification, then there is no chance it will be widely adopted and every chance the scientific worldview will be marginalized, to the obvious detriment of human welfare. They might be right. It’s an empirical matter. Answer to first question immediately below.

What’s the ‘good news’ about nihilism? Does evolution select for niceness?

The good news is that natural selection has shaped almost all of us to be nice enough to make human social life possible. It had to. Without such shaping, human social life on the African savannah, and ever since for that matter, would have been impossible. We are too puny to survive otherwise (even given our monstrously big brains).

Do you think accepting ‘nihilism’ will change how we act? Can ‘nihilism’ be ‘reclaimed’, or do you think we will need a new way of talking about ‘morality’?

No. The correct philosophical theory has almost no capacity to overwhelm two million years or more of natural selection. Insofar as we pursue human sciences, nihilism is inevitable, but the label has too many disturbing connotations to stick.

Understandably you take there to be no purpose to the universe. But it seems you want to make a much stronger and more radical claim – that there are no purposes in the universe. Could you say something about just how wrong we are about cognition and consciousness?

The four most difficult chapters of The Atheist’s Guide are devoted to this task, and most reviewers have avoided even discussing them. They are too hard for people who have never heard of the problem of intentionality or content or ‘aboutness.’ Once we take on board eliminativism about content, and Darwinism about every other instance of apparent purposiveness in the universe and in our brains, it’s easy to see that what consciousness tells us about ourselves, our motives, our plans, our purposes, is a tissue of illusions. This, not morality, is the part of our understanding of ourselves that requires radical reconstruction, at least for scientific purposes, if not for everyday life.

In your book you make the striking claim that “Ultimately, science and scientism are going to make us give up as illusory the very thing conscious experience screams out at us loudest and longest: the notion that when we think, our thoughts are about anything at all, inside or outside of our minds.” As you admit, this seems an absurd claim. While your detailed arguments for this position form a difficult and lengthy part of your book, could you give some small sketch of your grounds for making such a claim?

I started on that task in my answer to the last question. The best I can do in a few lines to answer the question further is to note that if intentionality, content, ‘aboutness,’ is impossible, given the way the brain works, it’s also impossible in consciousness—since that’s just more brain process. So, we need an explanation of the illusion that our conscious thoughts have sentential meaning and propositional content. Neuroscience explains why there is no original intentionality, along with no derived intentionality, in the brain. I show that adding consciousness doesn’t help in any way to create original intentionality. The argument is pretty simple once you grant that non-conscious brain states lack original intentionality because they can’t carry around information in the form of sentences.


Ultimately, what would the success of your arguments mean for the importance of history, the social sciences, literature and the humanities? And what would it mean for philosophy?

My arguments turn the humanities and the interpretative social sciences, especially history, into entertainments. They can’t be knowledge, but they don’t have to be in order to have the greatest importance—emotional, artistic, but not epistemic—in our lives. As for philosophy, done right it’s just very abstract and very general science.

Those interested in finding out more about Professor Rosenberg’s position are pointed towards this piece, written for the New York Times in response to an article by Oxford’s Timothy Williamson, who in turn replies critically to Rosenberg here. A further final exchange between the two can be found here. Professor Rosenberg also published a detailed précis of his book that can be found here at the ‘On The Human’ project – it is followed by critical responses from a number of noted philosophers (including Brian Leiter), to whom Rosenberg in turn replies. More recently, Rosenberg published a further piece at the same site titled ‘Final Thoughts of a Disenchanted Naturalist’.

Update: Massimo Pigliucci, philosopher at the City University of New York, has reviewed ‘The Atheist’s Guide’ for TPM; Philip Kitcher, John Dewey Professor of Philosophy at Columbia University, has reviewed it for the New York Times; and Michael Ruse, Lucyle T. Werkmeister Professor at Florida State University, has written a critical commentary on the book published over at Rationally Speaking.

[Further resources – ‘The mad dog naturalist’: Alex Rosenberg interviewed by Richard Marshall for 3am magazine (a longer read, with the latter showing how interviews can be better done). Alex in conversation with Ard Louis and David Malone for the ‘Why Are We Here?’ documentary series (43-minute video plus transcript and other resources at the same site). And, for the more ambitious, a difficult academic paper by Alex aiming to show why eliminative materialism isn’t, as many suggest, self-defeating – ‘Eliminativism without Tears’.]


Engineering Astronauts

Cover of "Man Plus"

If humanity remains a single planet species, our extinction is all but assured—there are so many ways the world could end. The mundane self-inflicted apocalypses include such things as war and environmental devastation. There are also more exotic dooms suitable for speculative science fiction, such as a robot apocalypse or a bioengineered plague. And, of course, there is the classic big rock from space scenario. While we will certainly bring our problems with us into space, getting off world would dramatically increase our chances of survival as a species.

While species do endeavor to survive, there is the moral question of whether or not we should do so. While I can easily imagine humanity reaching a state where it would be best if we did not continue, I think that our existence generates more positive value than negative value—thus providing the foundation for a utilitarian argument for our continued existence and endeavors to survive. This approach can also be countered on utilitarian grounds by contending that the evil we do outweighs the good, thus showing that the universe would be morally better without us. But, for the sake of the discussion that follows, I will assume that we should (or at least will) endeavor to survive.

Since getting off world is an excellent way of improving our survival odds, it is somewhat ironic that we are poorly suited for survival in space and on other worlds such as Mars. Obviously enough, naked exposure to the void would prove fatal very quickly; but even with technological protection our species copes poorly with the challenges of space travel—even those presented by the very short trip to our own moon. We would do somewhat better on other planets or on moons; but these also present significant survival challenges.

While there are many challenges, there are some of special concern. These include the danger presented by radiation, the health impact of living in gravity significantly different from earth, the resource (food, water and air) challenge, and (for space travel) the time problem. Any and all of these can prove to be fatal and must be addressed if humanity is to expand beyond earth.

Our current approach is to use our technology to recreate as closely as possible our home environment. For example, our manned space vessels are designed to provide some degree of radiation shielding, they are filled with air and are stocked with food and water. One advantage of this approach is that it does not require any modification to humans; we simply recreate our home in space or on another planet. There are, of course, many problems with this approach. One is that our technology is still very limited and cannot properly address some challenges. For example, while artificial gravity is standard in science fiction, we currently rely on rather ineffective means of addressing the gravity problem. As another example, while we know how to block radiation, there is the challenge of being able to do this effectively on the journey from earth to Mars. A second problem is that recreating our home environment can be difficult and costly. But, it can be worth the cost to allow unmodified humans to survive in space or on other worlds. This approach points towards a Star Trek style future: normal humans operating within a bubble of technology. There are, however, alternatives.

Another approach is also based in technology, but aims at either modifying humans or replacing them entirely. There are two main paths here. One is that of machine technology, in which humans are augmented in order to endure conditions that differ radically from those of earth. The scanners of Cordwainer Smith’s “Scanners Live in Vain” are one example of this—they are modified and have implants to enable them to survive the challenges of operating interstellar vessels. Another example is Man Plus, Frederik Pohl’s novel about a human transformed into a cyborg in order to survive on Mars. The ultimate end of this path is the complete replacement of humans by intelligent machines, machines designed to match their environments and free of human vulnerabilities and short life spans.

The other is the path of biological technology. On this path, humans are modified biologically in order to better cope with non-earth environments. These modifications would presumably start fairly modestly, such as genetic modifications to make humans more resistant to radiation damage and better adapted to lower gravity. As science progressed, the modifications could become far more radical, with a complete re-engineering of humans to make them ideally match their new environments. This path, unnaturally enough, would lead to the complete replacement of humans with new species.

These approaches do have advantages. While there would be an initial cost in modifying humans to better fit their new environments, the better the adaptations, the less need there would be to recreate earth-like conditions. This could presumably result in considerable cost-savings and there is also the fact that the efficiency and comfort of the modified humans would be greater the better they matched their new environments. There are, however, the usual ethical concerns about such modifications.

Replacing homo sapiens with intelligent machines or customized organisms would also have a high initial startup cost, but these beings would presumably be far more effective than humans in the new environments. For example, an intelligent machine would be more resistant to radiation, could sustain itself with solar power, and could be effectively immortal as long as it is repaired. Such a being would be ideal to crew (or be) a deep space mission vessel. As another example, custom created organisms or fully converted humans could ideally match an environment, living and working in radical conditions as easily as standard humans work on earth. Clifford D. Simak’s “Desertion” discusses such an approach; albeit one that has unexpected results on Jupiter.

In addition to the usual moral concerns about such things, there is also the concern that such creations would not preserve the human race. On the one hand, it is obvious that such beings would not be homo sapiens. If the entire species were converted or gradually phased out in favor of the new beings, that would be the end of the species—the biological human race would be no more. The voice of humanity would fall silent. On the other hand, it could be argued that the transition could suffice to preserve the identity of the species—a likely way to argue this would be to re-purpose the arguments commonly used to argue for the persistence of personal identity across time. It could also be argued that while the biological species homo sapiens could cease to be, the identity of humanity is not set by biology but by things such as values and culture. As such, if our replacements retained the relevant connection to human culture and values (they sing human songs and remember the old, old places where once we walked), they would still be human—although not homo sapiens.


Against accommodationism: How science undermines religion

Faith versus Fact
There is currently a fashion for religion/science accommodationism, the idea that there’s room for religious faith within a scientifically informed understanding of the world.

Accommodationism of this kind gains endorsement even from official science organizations such as, in the United States, the National Academy of Sciences and the American Association for the Advancement of Science. But how well does it withstand scrutiny?

Not too well, according to a new book by distinguished biologist Jerry A. Coyne.

Gould’s magisteria

The most famous, or notorious, rationale for accommodationism was provided by the celebrity palaeontologist Stephen Jay Gould in his 1999 book Rocks of Ages. Gould argues that religion and science possess separate and non-overlapping “magisteria”, or domains of teaching authority, and so they can never come into conflict unless one or the other oversteps its domain’s boundaries.

If we accept the principle of Non-Overlapping Magisteria (NOMA), the magisterium of science relates to “the factual construction of nature”. By contrast, religion has teaching authority in respect of “ultimate meaning and moral value” or “moral issues about the value and meaning of life”.

On this account, religion and science do not overlap, and religion is invulnerable to scientific criticism. Importantly, however, this is because Gould is ruling out many religious claims as being illegitimate from the outset even as religious doctrine. Thus, he does not attack the fundamentalist Christian belief in a young earth merely on the basis that it is incorrect in the light of established scientific knowledge (although it clearly is!). He claims, though with little real argument, that it is illegitimate in principle to hold religious beliefs about matters of empirical fact concerning the space-time world: these simply fall outside the teaching authority of religion.

I hope it’s clear that Gould’s manifesto makes an extraordinarily strong claim about religion’s limited role. Certainly, most actual religions have implicitly disagreed.

The category of “religion” has been defined and explained in numerous ways by philosophers, anthropologists, sociologists, and others with an academic or practical interest. There is much controversy and disagreement. All the same, we can observe that religions have typically been somewhat encyclopedic, or comprehensive, explanatory systems.

Religions usually come complete with ritual observances and standards of conduct, but they are more than mere systems of ritual and morality. They typically make sense of human experience in terms of a transcendent dimension to human life and well-being. Religions relate these to supernatural beings, forces, and the like. But religions also make claims about humanity’s place – usually a strikingly exceptional and significant one – in the space-time universe.

It would be naïve or even dishonest to imagine that this somehow lies outside of religion’s historical role. While Gould wants to avoid conflict, he creates a new source for it, since the principle of NOMA is itself contrary to the teachings of most historical religions. At any rate, leaving aside any other, or more detailed, criticisms of the NOMA principle, there is ample opportunity for religion(s) to overlap with science and come into conflict with it.

Coyne on religion and science

The genuine conflict between religion and science is the theme of Jerry Coyne’s Faith versus Fact: Why Science and Religion are Incompatible (Viking, 2015). This book’s appearance was long anticipated; it’s a publishing event that prompts reflection.

In pushing back against accommodationism, Coyne portrays religion and science as “engaged in a kind of war: a war for understanding, a war about whether we should have good reasons for what we accept as true.” Note, however, that he is concerned with theistic religions that include a personal God who is involved in history. (He is not, for example, dealing with Confucianism, pantheism or austere forms of philosophical deism that postulate a distant, non-interfering God.)

Accommodationism is fashionable, but that has less to do with its intellectual merits than with widespread solicitude toward religion. There are, furthermore, reasons why scientists in the USA (in particular) find it politically expedient to avoid endorsing any “conflict model” of the relationship between religion and science. Even if they are not religious themselves, many scientists welcome the NOMA principle as a tolerable compromise.

Some accommodationists argue for one or another very weak thesis: for example, that this or that finding of science (or perhaps our scientific knowledge base as a whole) does not logically rule out the existence of God (or the truth of specific doctrines such as Jesus of Nazareth’s resurrection from the dead). For example, it is logically possible that current evolutionary theory and a traditional kind of monotheism are both true.

But even if we accept such abstract theses, where does it get us? After all, the following may both be true:

1. There is no strict logical inconsistency between the essentials of current evolutionary theory and the existence of a traditional sort of Creator-God.

AND

2. Properly understood, current evolutionary theory nonetheless tends to make Christianity as a whole less plausible to a reasonable person.

If 1. and 2. are both true, it’s seriously misleading to talk about religion (specifically Christianity) and science as simply “compatible”, as if science – evolutionary theory in this example – has no rational tendency at all to produce religious doubt. In fact, the cumulative effect of modern science (not least, but not solely, evolutionary theory) has been to make religion far less plausible to well-informed people who employ reasonable standards of evidence.

For his part, Coyne makes clear that he is not talking about a strict logical inconsistency. Rather, incompatibility arises from the radically different methods used by science and religion to seek knowledge and assess truth claims. As a result, purported knowledge obtained from distinctively religious sources (holy books, church traditions, and so on) ends up being at odds with knowledge grounded in science.

Religious doctrines change, of course, as they are subjected over time to various pressures. Faith versus Fact includes a useful account of how they are often altered for reasons of mere expediency. One striking example is the decision by the Mormons (as recently as the 1970s) to admit blacks into their priesthood. This was rationalised as a new revelation from God, which raises an obvious question as to why God didn’t know from the start (and convey to his worshippers at an early time) that racial discrimination in the priesthood was wrong.

It is, of course, true that a system of religious beliefs can be modified in response to scientific discoveries. In principle, therefore, any direct logical contradictions between a specified religion and the discoveries of science can be removed as they arise and are identified. As I’ve elaborated elsewhere (e.g., in Freedom of Religion and the Secular State (2012)), religions have seemingly endless resources to avoid outright falsification. In the extreme, almost all of a religion’s stories and doctrines could gradually be reinterpreted as metaphors, moral exhortations, resonant but non-literal cultural myths, and the like, leaving nothing to contradict any facts uncovered by science.

In practice, though, there are usually problems when a particular religion adjusts. Depending on the circumstances, a process of theological adjustment can meet with internal resistance, splintering and mutual anathemas. It can lead to disillusionment and bitterness among the faithful. The theological system as a whole may eventually come to look very different from its original form; it may lose its original integrity and much of what once made it attractive.

All forms of Christianity – Catholic, Protestant, and otherwise – have had to respond to these practical problems when confronted by science and modernity.

Coyne emphasizes, I think correctly, that the all-too-common refusal by religious thinkers to accept anything as undercutting their claims has a downside for believability. To a neutral outsider, or even to an insider who is susceptible to theological doubts, persistent tactics to avoid falsification will appear suspiciously ad hoc.

To an outsider, or to anyone with doubts, those tactics will suggest that religious thinkers are not engaged in an honest search for truth. Rather, they are preserving their favoured belief systems through dogmatism and contrivance.

How science subverted religion

In principle, as Coyne also points out, the important differences in methodology between religion and science might (in a sense) not have mattered. That is, it could have turned out that the methods of religion, or at least those of the true religion, gave the same results as science. Why didn’t they?

Let’s explore this further. The following few paragraphs are my analysis, drawing on earlier publications, but I believe they’re consistent with Coyne’s approach. (Compare also Susan Haack’s non-accommodationist analysis in her 2007 book, Defending Science – within Reason.)

At the dawn of modern science in Europe – back in the sixteenth and seventeenth centuries – religious worldviews prevailed without serious competition. In such an environment, it should have been expected that honest and rigorous investigation of the natural world would confirm claims that were already found in the holy scriptures and church traditions. If the true religion’s founders had genuinely received knowledge from superior beings such as God or angels, the true religion should have been, in a sense, ahead of science.

There might, accordingly, have been a process through history by which claims about the world made by the true religion (presumably some variety of Christianity) were successively confirmed. The process might, for example, have shown that our planet is only six thousand years old (give or take a little), as implied by the biblical genealogies. It might have identified a global extinction event – just a few thousand years ago – resulting from a worldwide cataclysmic flood. Science could, of course, have added many new details over time, but not anything inconsistent with pre-existing knowledge from religious sources.

Unfortunately for the credibility of religious doctrine, nothing like this turned out to be the case. Instead, as more and more evidence was obtained about the world’s actual structures and causal mechanisms, earlier explanations of the appearances were superseded. As science has advanced historically, it has increasingly revealed religion as premature in its attempts at understanding the world around us.

As a consequence, religion’s claims to intellectual authority have become less and less rationally believable. Science has done much to disenchant the world – once seen as full of spiritual beings and powers – and to expose the pretensions of priests, prophets, religious traditions, and holy books. It has provided an alternative, if incomplete and provisional, image of the world, and has rendered much of religion anomalous or irrelevant.

By now, the balance of evidence has turned decisively against any explanatory role for beings such as gods, ghosts, angels, and demons, and in favour of an atheistic philosophical naturalism. Regardless of what other factors were involved, the consolidation and success of science played a crucial role in this. In short, science has shown a historical, psychological, and rational tendency to undermine religious faith.

Not only the sciences!

I need to add that the damage to religion’s authority has come not only from the sciences, narrowly construed, such as evolutionary biology. It has also come from work in what we usually regard as the humanities. Christianity and other theistic religions have especially been challenged by the efforts of historians, archaeologists, and academic biblical scholars.

Those efforts have cast doubt on the provenance and reliability of the holy books. They have implied that many key events in religious accounts of history never took place, and they’ve left much traditional theology in ruins. In the upshot, the sciences have undermined religion in recent centuries – but so have the humanities.

Coyne would not tend to express it that way, since he favours a concept of “science broadly construed”. He elaborates this as: “the same combination of doubt, reason, and empirical testing used by professional scientists.” On his approach, history (at least in its less speculative modes) and archaeology are among the branches of “science” that have refuted many traditional religious claims with empirical content.

But what is science? Like most contemporary scientists and philosophers, Coyne emphasizes that there is no single process that constitutes “the scientific method”. Hypothetico-deductive reasoning is, admittedly, very important to science. That is, scientists frequently make conjectures (or propose hypotheses) about unseen causal mechanisms, deduce what further observations could be expected if their hypotheses are true, then test to see what is actually observed. However, the process can be untidy. For example, much systematic observation may be needed before meaningful hypotheses can be developed. The precise nature and role of conjecture and testing will vary considerably among scientific fields.

Likewise, experiments are important to science, but not to all of its disciplines and sub-disciplines. Fortunately, experiments are not the only way to test hypotheses (for example, we can sometimes search for traces of past events). Quantification is also important… but not always.

However, Coyne says, a combination of reason, logic and observation will always be involved in scientific investigation. Importantly, some kind of testing, whether by experiment or observation, is needed to filter out non-viable hypotheses.

If we take this sort of flexible and realistic approach to the nature of science, the line between the sciences and the humanities becomes blurred. Though they tend to be less mathematical and experimental, for example, and are more likely to involve mastery of languages and other human systems of meaning, the humanities can also be “scientific” in a broad way. (From another viewpoint, of course, the modern-day sciences, and to some extent the humanities, can be seen as branches from the tree of Greek philosophy.)

It follows that I don’t terribly mind Coyne’s expansive understanding of science. If the English language eventually evolves in the direction of employing his construal, nothing serious is lost. In that case, we might need some new terminology – “the cultural sciences” anyone? – but that seems fairly innocuous. We already talk about “the social sciences” and “political science”.

For now, I prefer to avoid confusion by saying that the sciences and humanities are continuous with each other, forming a unity of knowledge. With that terminological point under our belts, we can then state that both the sciences and the humanities have undermined religion during the modern era. I expect they’ll go on doing so.

A valuable contribution

In challenging the undeserved hegemony of religion/science accommodationism, Coyne has written a book that is notably erudite without being dauntingly technical. The style is clear, and the arguments should be understandable and persuasive to a general audience. The tone is rather moderate and thoughtful, though opponents will inevitably cast it as far more polemical and “strident” than it really is. This seems to be the fate of any popular book, no matter how mild-mannered, that is critical of religion.

Coyne displays a light touch, even while drawing on his deep involvement in scientific practice (not to mention a rather deep immersion in the history and detail of Christian theology). He writes, in fact, with such seeming simplicity that it can sometimes be a jolt to recognize that he’s making subtle philosophical, theological, and scientific points.

In that sense, Faith versus Fact testifies to a worthwhile literary ideal. If an author works at it hard enough, even difficult concepts and arguments can usually be made digestible. It won’t work out in every case, but this is one where it does. That’s all the more reason why Faith versus Fact merits a wide readership. It’s a valuable, accessible contribution to a vital debate.

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

The Left’s Defection from Progress

Note: This is a slightly abridged (but otherwise largely warts and all) version of an article that I had published in Quadrant magazine in April 1999. It has not previously been published online (except that I am cross-posting on my own blog, Metamagician and the Hellfire Club). While my views have developed somewhat in the interim, there may be some advantage in republishing it for a new audience, especially at a time when there is much discussion of a “regressive left”.

I.

In a recent mini-review of David Stove’s Anything Goes: Origins of Scientific Irrationalism (originally published in 1982 as Popper and After), Diane Carlyle and Nick Walker make a casual reference to Stove’s “reactionary polemic”. By contrast, they refer to the philosophies of science that Stove attacks as “progressive notions of culture-based scientific knowledge”. To say the least, this appears tendentious.

To be fair, Carlyle and Walker end up saying some favourable things about Stove’s book. What is nonetheless alarming about their review is that it evidences just how easy it has become to write as if scientific realism were inherently “reactionary” and the more or less relativist views of scientific knowledge that predominate among social scientists and humanities scholars were “progressive”.

The words “reactionary” and “progressive” usually attach themselves to political and social movements, some kind of traditionalist or conservative backlash versus an attempt to advance political liberties or social equality. Perhaps Carlyle and Walker had another sense in mind, but the connotations of their words are pretty inescapable. Moreover, they would know as well as I do that there is now a prevalent equation within the social sciences and humanities of relativist conceptions of truth and reality with left-wing social critique, and of scientific realism with the political right. Carlyle and Walker wrote their piece against that background. But where does it leave those of us who retain at least a temperamental attachment to the left, however nebulous that concept is becoming, while remaining committed to scientific realism? To adapt a phrase from Christina Hoff Sommers, we are entitled to ask who has been stealing socially liberal thought in general.

Is the life of reason and liberty (intellectual and otherwise) that some people currently enjoy in some countries no more than an historical anomaly, a short-lived bubble that will soon burst? It may well appear so. Observe the dreadful credulity of the general public in relation to mysticism, magic and pseudoscience, and the same public’s preponderant ignorance of genuine science. Factor in the lowbrow popularity of religious fundamentalism and the anti-scientific rantings of highbrow conservatives such as Bryan Appleyard. Yet the sharpest goad to despair is the appearance that what passes for the intellectual and artistic left has now repudiated the Enlightenment project of conjoined scientific and social progress.

Many theorists in the social sciences and humanities appear obsessed with dismantling the entirety of post-Enlightenment political, philosophical and scientific thought. This is imagined to be a progressive act, desirable to promote the various social, cultural and other causes that have become politically urgent in recent decades, particularly those associated with sex, race, and the aftermath of colonialism. The positions on these latter issues taken by university-based theorists give them a claim to belong to, if not actually constitute, the “academic left”, and I’ll refer to them with this shorthand expression.

There is, however, nothing inherently left-wing about wishing to sweep away our Enlightenment legacy. Nor is a commitment to scientific inquiry and hard philosophical analysis inconsistent with socially liberal views. Abandonment of the project of rational inquiry, with its cross-checking of knowledge in different fields, merely opens the door to the worst kind of politics that the historical left could imagine, for the alternative is that “truth” be determined by whoever, at particular times and places, possesses sufficient political or rhetorical power to decide what beliefs are orthodox. The rationality of our society is at stake, but so is the fate of the left itself, if it is so foolish as to abandon the standards of reason for something more like a brute contest for power.

It is difficult to know where to start in criticising the academic left’s contribution to our society’s anti-rationalist drift. The approaches I am gesturing towards are diverse among themselves, as well as being professed in the universities side by side with more traditional methods of analysing society and culture. There is considerable useful dialogue among all these approaches, and it can be difficult obtaining an accurate idea of specific influences within the general intellectual milieu.

However, amidst all the intellectual currents and cross-currents, it is possible to find something of a common element in the thesis or assumption (sometimes one, sometimes the other) that reality, or our knowledge of it, is “socially constructed”. There are many things this might mean, and I explain below why I do not quarrel with them all.

In the extreme, however, our conceptions of reality, truth and knowledge are relativised, leading to absurd doctrines, such as the repudiation of deductive logic or the denial of a mind-independent world. Symptomatic of the approach I am condemning is a subordination of the intellectual quest for knowledge and understanding to political and social advocacy. Some writers are prepared to misrepresent mathematical and scientific findings for the purposes of point scoring or intellectual play, or the simple pleasure of ego-strutting. All this is antithetical to Enlightenment values, but so much – it might be said – for the Enlightenment.

II.

The notion that reality is socially constructed would be attractive and defensible if it were restricted to a thesis about the considerable historical contingency of any culture’s social practices and mores, and its systems of belief, understanding and evaluation. These are, indeed, shaped partly by the way they co-evolve and “fit” with each other, and by the culture’s underlying economic and other material circumstances.

The body of beliefs available to anyone will be constrained by the circumstances of her culture, including its attitude to free inquiry, the concepts it has already built up for understanding the world, and its available technologies for the gathering of data. Though Stove is surely correct to emphasise that the accumulation of empirical knowledge since the 17th century has been genuine, the directions taken by science have been influenced by pre-existing values and beliefs. Meanwhile, social practices, metaphysical and ethical (rather than empirical) beliefs, the methods by which society is organised and by which human beings understand their experience are none of them determined in any simple, direct or uniform way by human “nature” or biology, or by transcendental events.

So far, so good – but none of this is to suggest that all of these categories should or can be treated in exactly the same way. Take the domain of metaphysical questions. Philosophers working in metaphysics are concerned to understand such fundamentals as space, time, causation, the kinds of substances that ultimately exist, the nature of consciousness and the self. The answers cannot simply be “read off” our access to empirical data or our most fundamental scientific theories, or some body of transcendental knowledge. Nonetheless, I am content to assume that all these questions, however intractable we find them, have correct answers.

The case of ethical disagreement may be very different, and I discuss it in more detail below. It may be that widespread and deep ethical disagreement actually evidences the correctness of a particular metaphysical (and meta-ethical) theory – that there are no objectively existing properties of moral good and evil. Yet, to the extent that they depend upon empirical beliefs about the consequences of human conduct, practical moral judgements may often be reconcilable. Your attitude to the rights of homosexuals will differ from mine if yours is based on a belief that homosexual acts cause earthquakes.

Again, the social practices of historical societies may turn out to be constrained by our biology in a way that is not true of the ultimate answers to questions of metaphysics. All these are areas where human behaviour and belief may be shaped by material circumstances and the way they fit with each other, and relatively unconstrained by empirical knowledge. But, to repeat, they are not all the same.

Where this appears to lead us is that, for complicated reasons and in awkward ways, there is much about the practices and beliefs of different cultures that is contingent on history. In particular, the way institutions are built up around experience is more or less historically contingent, dependent largely upon economic and environmental circumstances and on earlier or co-evolving layers of political and social structures. Much of our activity as human beings in the realms of understanding, organising, valuing and responding to experience can reasonably be described as “socially constructed”, and it will often make perfectly good sense to refer to social practices, categories, concepts and beliefs as “social constructions”.

Yet this modest insight cries out for clear intellectual distinctions and detailed application to particular situations, with conscientious linkages to empirical data. It cannot provide a short-cut to moral perspicuity or sound policy formulation. Nor is it inconsistent with a belief in the actual existence of law-governed events in the empirical world, which can be the subject of objective scientific theory and accumulating knowledge.

III.

As Antony Flew once expressed it, what is socially constructed is not reality itself but merely “reality”: the beliefs, meanings and values available within a culture.

Thus, none of what I’ve described so far amounts to “social constructionism” in a pure or philosophical sense, since this would require, in effect, that we never have any knowledge. It would require a thesis that all beliefs are so deeply permeated by socially specific ideas that they never transcend their social conditions of production to the extent of being about objective reality. To take this a step further, even the truth about physical nature would be relative to social institutions – relativism applies all the way down.

Two important points need to be made here. First, even without such a strong concept of socially constructed knowledge, social scientists and humanities scholars have considerable room to pursue research programs aimed at exploring the historically contingent nature of social institutions. In the next section, I argue that this applies quintessentially to socially accepted moral beliefs.

Second, however, there is a question as to why anyone would insist upon the thesis that the nature of reality is somehow relative to social beliefs all the way down, that there is no point at which we ever hit a bedrock of truth and falsity about anything. It is notorious that intellectuals who use such language sometimes retreat, when challenged, to a far more modest or equivocal kind of position.

Certainly, there is no need for anyone’s political or social aims to lead them to deny the mind-independent existence of physical nature, or to suggest that the truth about it is, in an ultimate sense, relative to social beliefs or subjective to particular observers. Nonetheless, many left-wing intellectuals freely express a view in which reality, not “reality”, is a mere social construction.

IV.

If social construction theory is to have any significant practical bite, then it has to assert that moral beliefs are part of what is socially constructed. I wish to explore this issue through some more fundamental considerations about ethics.

It is well-documented that there are dramatic contrasts between different societies’ practical beliefs about what is right and wrong, so much so that the philosopher J.L. Mackie said that these “make it difficult to treat those judgements as apprehensions of objective truths.” As Mackie develops the argument, it is not part of some general theory that “the truth is relative”, but involves a careful attempt to show that the diversity of moral beliefs is not analogous to the usual disagreements about the nature of the physical world.

Along with other arguments put by philosophers in Hume’s radical empiricist tradition, Mackie’s appeal to cultural diversity may persuade us that there are no objective moral truths. Indeed, it seems to me that there are only two positions here that are intellectually viable. The first is that Mackie is simply correct. This idea might seem to lead to cultural relativism about morality, but things are not always what they seem.

The second viable position is that there are objective moral truths, but they take the form of principles of an extremely broad nature, broad enough to help shape – rather than being shaped by – a diverse range of social practices in different environmental, economic and other circumstances.

If this is so, particular social practices and practical moral beliefs have some ultimate relationship to fundamental moral principles, but there can be enormous “slippage” between the two, depending on the range of circumstances confronting different human societies. Moreover, during times of rapid change such as industrialised societies have experienced in the last three centuries – and especially the last several decades – social practices and practical moral beliefs might tend to be frozen in place, even though they have become untenable. Conversely, there might be more wisdom, or at least rationality, than is apparent to most Westerners in the practices and moral beliefs of traditional societies. All societies, however, might have practical moral beliefs that are incorrect because of lack of empirical knowledge about the consequences of human conduct.

Taken with my earlier, more general, comments about various aspects of social practices and culturally-accepted “reality”, this approach gives socially liberal thinkers much of what they want. It tends to justify those who would test and criticise the practices and moral beliefs of Western nations while defending the rationality and sophistication of people from colonised cultures.

V.

The academic left’s current hostility to science and the Enlightenment project may have its origins in a general feeling, brought on by the twentieth century’s racial and ideological atrocities, that the Enlightenment has failed. Many intellectuals have come to see science as complicit in terror, oppression and mass killing, rather than as an inspiration for social progress.

The left’s hostility has surely been intensified by a quite specific fear that the reductive study of human biology will cross a bridge from the empirical into the normative realm, where it may start to dictate the political and social agenda in ways that can aptly be described as reactionary. This, at least, is the inference I draw from left-wing intellectuals’ evident detestation of human sociobiology or evolutionary psychology.

The fear may be that dubious research in areas such as evolutionary psychology and/or cognitive neuroscience will be used to rationalise sexist, racist or other illiberal positions. More radically, it may be feared that genuine knowledge of a politically unpalatable or otherwise harmful kind will emerge from these areas. Are such fears justified? To dismiss them lightly would be irresponsible and naive. I can do no more than place them in perspective. The relationship between the social sciences and humanities, on the one hand, and the “hard” end of psychological research, on the other, is one of the most important issues to be tackled by intellectuals in all fields – the physical sciences, social sciences and humanities.

One important biological lesson we have learned is that human beings are not, in any reputable sense, divided into “races”. As an empirical fact of evolutionary history and genetic comparison, we are all so alike that superficial characteristics such as skin or hair colour signify nothing about our moral or intellectual worth, or about the character of our inner experience. Yet, what if it had turned out otherwise? It is understandable if people are frightened by our ability to research such issues. At the same time, the alternative is to suppress rational inquiry in some areas, leaving questions of orthodoxy to whoever wins the naked contest for power. This is neither rational nor safe.

What implications could scientific knowledge about ourselves have for moral conduct or social policy? No number of factual statements about human nature, by themselves, can ever entail statements that amount to moral knowledge, as Hume demonstrated. What is required is an ethical theory, persuasive on other grounds, that already links “is” and “ought”. This might be found, for example, in a definition of moral action in terms of human flourishing, though it is not clear why we should, as individuals, be concerned about something as abstract as that – why not merely the flourishing of ourselves or our particular loved ones?

One comfort is that, even if we had a plausible set of empirical and meta-ethical gadgets to connect what we know of human nature to high-level questions about social policy, we would discover significant slippage between levels. Nature does not contradict itself, and no findings from a field such as evolutionary psychology could be inconsistent with the observed facts of cultural diversity. If reductive explanations of human nature became available in more detail, these must turn out to be compatible with the existence of the vast spectrum of viable cultures that human beings have created so far. And there is no reason to believe that a lesser variety of cultures will be workable in the material circumstances of a high-technology future.

The dark side of evolutionary psychology includes, among other things, some scary-looking claims about the reproductive and sociopolitical behaviour of the respective sexes. True, no one seriously asserts that sexual conduct in human societies and the respective roles of men and women within families and extra-familial hierarchies are specified by our genes in a direct or detailed fashion. What, however, are we to make of the controversial analyses of male and female reproductive “strategies” that have been popularised by several writers in the 1990s? Perhaps the best-known exposition is that of Matt Ridley in The Red Queen: Sex and the Evolution of Human Nature (1993). Such accounts offer evidence and argument that men are genetically hardwired to be highly polygamous or promiscuous, while women are similarly programmed to be imperfectly monogamous, as well as sexually deceitful.

In responding to this, first, I am in favour of scrutinising the evidence for such claims very carefully, since they can so readily be adapted to support worn-out stereotypes about the roles of the sexes. That, however, is a reason to show scientific and philosophical rigour, not to accept strong social constructionism about science. Second, even if findings similar to those synthesised by Ridley turned out to be correct, the social consequences are by no means apparent. Mere biological facts cannot tell us in some absolute way what are the correct sexual mores for a human society.

To take this a step further, theories about reproductive strategies suggest that there are in-built conflicts between the interests of men and women, and of higher and lower status men, which will inevitably need to be moderated by social compromise, not necessarily in the same way by different cultures. If all this were accepted for the sake of argument, it might destroy a precious notion about ourselves: that there is a simple way for relations between the sexes to be harmonious. On the other hand, it would seem to support rather than refute what might be considered a “progressive” notion: that no one society, certainly not our own, has the absolutely final answer to questions about sexual morality.

Although evolutionary psychology and cognitive neuroscience are potential minefields, it is irrational to pretend that they are incapable of discovering objective knowledge. Fortunately, such knowledge will surely include insight into the slippage between our genetic similarity and the diversity of forms taken by viable cultures. The commonality of human nature will be at a level that is consistent with the (substantial) historical contingency of social practices and of many areas of understanding and evaluative belief. The effect on social policy is likely to be limited, though we may become more charitable about what moral requirements are reasonable for the kinds of creatures that we are.

I should add that evolutionary psychology and cognitive neuroscience are not about to put the humanities, in particular, out of business. There are good reasons why the natural sciences cannot provide a substitute for humanistic explanation, even if we obtain a far deeper understanding of our own genetic and neurophysiological make-up. This is partly because reductive science is ill-equipped to deal with the particularity of complex events, partly because causal explanation may not be all that we want, anyway, when we try to interpret and clarify human experience.

VI.

Either there are no objective moral truths or they are of an extremely general kind. Should we, therefore, become cultural relativists?

Over a quarter of a century ago, Bernard Williams made the sharp comment that cultural relativism is “possibly the most absurd view to have been advanced even in moral philosophy”. To get this clear, Williams was criticising a cluster of beliefs that has a great attraction for left-wing academics and many others who preach inter-cultural tolerance: first, that what is “right” means what is right for a particular culture; second, that what is right for a particular culture refers to what is functionally valuable for it; and third, that it is “therefore” wrong for one culture to interfere with the organisation or values of another.

As Williams pointed out, these propositions are internally inconsistent. Not only does the third not follow from the others; it cannot be asserted while the other two are maintained. After all, it may be functionally valuable to culture A (and hence “right” within that culture) for it to develop institutions for imposing its will on culture B. These may include armadas and armies, colonising expeditions, institutionalised intolerance, and aggressively proselytising religions. In fact, nothing positive in the way of moral beliefs, political programs or social policy can ever be derived merely from a theory of cultural relativism.

That does not mean that there are no implications at all from the insight that social practices and beliefs are, to a large degree, contingent on history and circumstance. Depending upon how we elaborate this insight, we may have good reason to suspect that another culture’s odd-looking ways of doing things are more justifiable against universal principles of moral value than is readily apparent. In that case, we may also take the view that the details of how our own society, or an element of it, goes about things are open to challenge as to how far they are (or remain?) justifiable against such universal principles.

If, on the other hand, we simply reject the existence of any objective moral truths – which I have stated to be a philosophically viable position – we will have a more difficult time explaining why we are active in pursuing social change. Certainly, we will not be able to appeal to objectively applicable principles to justify our activity. All the same, we may be able to make positive commitments to ideas such as freedom, equality or benevolence that we find less arbitrary and more psychologically satisfying than mere acquiescence in “the way they do things around here”. In no case, however, can we intellectually justify a course of political and social activism without more general principles or commitments to supplement the bare insight that, in various complicated ways, social beliefs and practices are largely contingent.

VII.

An example of an attempt to short-circuit the kind of hard thinking about moral foundations required to deal with contentious issues is Martin F. Katz’s well-known article, “After the Deconstruction: Law in the Age of Post-Structuralism”. Katz is a jurisprudential theorist who is committed to a quite extreme form of relativism about empirical knowledge. In particular, his article explicitly assigns the findings of physical science the same status as the critical interpretations of literary works.

Towards the end of “After the Deconstruction”, Katz uses the abortion debate as an example of how what he calls “deconstructionism” or the “deconstructionist analysis” can clarify and arbitrate social conflict. He begins by stating the debate much as it might be seen by its antagonists:

One side of the debate holds that abortion is wrong because it involves the murder of an unborn baby. The other side of the debate sees abortion as an issue of self-determination; the woman’s right to choose what she does to her body. How do we measure which of these “rights” should take priority?

In order to avoid any sense of evasion, I’ll state clearly that the second of these positions, the “pro-choice” position, is closer to my own. However, either position has more going for it in terms of rationality than what Katz actually advocates.

Weighing these claims rationally, however, is not how Katz proposes to solve the problem of abortion. He begins by stating that “deconstructionism” recommends that we “resist the temptation to weigh the legitimacy of . . . these competing claims.” Instead, we should consider the different “subjugations” supposedly instigated by the pro-life and pro-choice positions. The pro-life position is condemned because it denies women the choice of what role they wish to take in society, while the pro-choice position is apparently praised (though even this is not entirely clear) for shifting the decision about whether and when to have children directly to women.

The trouble with this is that it prematurely forecloses on the metaphysical and ethical positions at stake, leaving everything to be solved in terms of power relations. However, if we believe that a foetus (say at a particular age) is a person in some sense that entails moral regard, or a being that possesses a human soul, then there are moral consequences. Such beliefs, together with some plausible assumptions about our moral principles or commitments, entail that we should accept that aborting the foetus is an immoral act. The fact that banning the abortion may reduce the political power of the woman concerned, or of women generally, over against that of men will seem to have little moral bite, unless we adopt a very deep principle of group political equality. That would require ethical argument of an intensity which Katz never attempts.

If we take it that the foetus is not a person in the relevant sense, we may be far more ready to solve the problem (and to advocate an assignment of “rights”) on the basis of utilitarian, or even libertarian, principles. By contrast, the style of “deconstructionist” thought advocated by Katz threatens to push rational analysis aside altogether, relying on untheorised hunches or feelings about how we wish power to be distributed in our society. This approach can justifiably be condemned as irrational. At the same time, the statements that Katz makes about the political consequences for men or women of banning or legalising abortion are so trite that it is difficult to imagine how anyone not already beguiled by an ideology could think that merely stating them could solve the problem.

VIII.

In the example of Katz’s article, as in the general argument I have put, the insight that much in our own society’s practices and moral beliefs is “socially constructed” can do only a modest amount of intellectual work. We may have good reason to question the way they do things around here, to subject it to deeper analysis. We may also have good reason to believe that the “odd” ways they do things in other cultures make more sense than is immediately apparent to the culture-bound Western mind. All very well. None of this, however, can undermine the results of systematic empirical inquiry. Nor can it save us from the effort of grappling with inescapable metaphysical and ethical questions, just as we had to do before the deconstruction.

[My Amazon author page]

The monsters of Jurassic World

Russell Blackford, University of Newcastle

Philosophers and blockbusters

There are at least three reasons why philosophers take an interest in hugely popular cultural products such as Hollywood blockbuster action movies. First is a kind of (non-objectionable) opportunism. At least some of these movies, etc., grapple with philosophical issues: usually moral issues, but sometimes metaphysical and epistemological ones, such as those relating to personal identity or to the problems of appearance versus reality. If these are brought to public attention in very popular forms, it provides an opportunity for philosophers to discuss – and perhaps clarify – them. There’s nothing wrong with that: the exercise may be enjoyable, and even educational, all round, though the various discussions that follow may not tell us much about the actual merits of the movie (book, video game, or whatever) that acted as the springboard.

Second, there might be more to the exercise than mere opportunism. If certain moral, metaphysical, and other philosophical ideas are being popularised, philosophers may well be qualified to discuss the merits of those ideas, whether to support them, to counter them, or to say something about them that is more nuanced and complex. Here, the creators of a movie such as Jurassic World are being treated as participants in an ongoing philosophical conversation. The movie is not used merely as a springboard; rather, its particular take on the issues is sought out, revealed, and perhaps endorsed or disputed (or some combination of these).

Third, we may be interested, in a more general way, in how artworks and cultural products engage with philosophical ideas. In that sense, our interests as philosophers may overlap with those of literary and cultural theorists, although we bring different training to the inquiry. For example, I am interested in the way Jurassic World conveys attitudes to technology, not merely as a springboard to discuss those attitudes, and not merely to discuss those particular attitudes on their merits – I am also interested in it as an example of how cultural products generally, movies in particular, and science fiction blockbusters even more specifically, represent technology. Perhaps there is something of general interest to say about this, and a new movie with such popular appeal might tend to confirm or undermine what we think we know.

In practice, we may be interested in all three of these aspects and perhaps others that don’t immediately come to mind. If I review Jurassic World, say, as I did briefly on my personal blog, I will tend to run these levels together to an extent. Still, philosophers might have something to say that is a bit different from what you’d expect from a conventional film critic (that said, philosophers often have rather broad educational backgrounds, including in cultural criticism; conversely, I’m sure that many film critics have studied philosophy to some extent or other – we don’t live in entirely separate intellectual silos).

The Jurassic formula

The Jurassic Park franchise has achieved immense commercial success, though the second and third movies were never as popular as the original Jurassic Park in 1993. Jurassic World is breaking box office records on a daily basis, most recently, as I write, the record for box office takings in the US domestic market in its first seven days of release. Something has clicked with the public, not only in the US but throughout the world.

Part of that has to do with the fact that these movies are just plain fun – scary enough to make kids, or even adults, jump out of their seats, but not so confrontational as to rule them out as family entertainment. They are expertly directed, employ impressive special effects (brought up to date in the latest movie – alas, the 1993 effects are looking a bit dated by now), and use charismatic actors such as Chris Pratt.

There is also a morality play element, often highlighting the characters’ attitudes to technology. Many characters are killed swiftly – they are pretty much treated as dino fodder – but elaborate, and often humiliating, deaths are given to the characters who appear most venal or blinded by pride. (Perhaps the most humiliating death of all is given to the lawyer, Donald Gennaro, in the first movie.) Other characters are shown as having moral weaknesses, but they are punished (by their terrifying encounters with the rampaging dinosaurs) and ultimately redeemed. All of this is no doubt emotionally satisfying to a popular audience.

Thus, the dinosaurs are not portrayed simply as “bad guys” or monsters. To a large extent, they are more like instruments of fate, or something like karma, inflicting rewards and punishments. It is fair to say that the real monsters of Jurassic World and its predecessors are the human beings who exploit genetic technology in ways that are portrayed to us as greedy, vain, and irresponsible.

Attitudes to technology

The genetic technology used to reconstruct dinosaurs from fossilised DNA is fairly consistently portrayed as evil – the whole exercise in recreating the dinosaurs from ancient genetic material has something monstrous about it, or so the movies would lead us to believe. But there is an ambiguity here, a certain instability of attitude, because the dinosaurs themselves are not only dangerous and terrifying. Some of them are relatively harmless, and they are shown variously as fun, exciting, alluring, even sublime. This kind of allure associated with products of technology is almost inevitable in feature movies with a technophobic element (a point that I owe to the critic J.P. Telotte). After all, we, as moviegoers, are much like the audience of the Jurassic World theme park: we expect to be impressed and awed by the dinosaurs, not just scared by them.

This is a common feature in Hollywood’s science-fiction blockbusters. Even in the movies of the Terminator franchise, the original Terminator – a futuristic killing machine in human form, portrayed by Arnold Schwarzenegger – has its alluring aspects. A similar machine, also portrayed by Schwarzenegger, became a hero in the second movie of the franchise, Terminator 2: Judgment Day (1991). Terminators are scary and nasty, as we are shown, but they are cool.

We can see this element handled with a certain knowingness in Jurassic World, where the scary new dinosaur, Indominus rex, is not an attempt at recreating a beast from the Mesozoic Era, but has been genetically engineered as a theme park attraction that will be even more impressive than the likes of Tyrannosaurus rex. In the event, Indominus rex is depicted as an almost demonic creature, and it is notable for killing other dinosaurs for sport (recalling, perhaps, the human big-game hunters of the second movie in the series). At the same time, we are reminded that all of the dinosaurs created by advanced genetic science are, in more ways than one, unnatural. Not only are they products of human design and creation: they have been brought about in ways that make them imperfect (in some ways more dangerous) copies of the original animals that they mimic.

Still, the Indominus rex is even more – perhaps triply? – unnatural, with its deliberate “improvements”. To rub in the point, its enhanced abilities include extraordinary levels of stealth and cunning, as well as the cruelty that was asked for in its specifications.

Conclusion

Hollywood science-fiction blockbusters can often seem like works of anti-science fiction, expressing distrust of science and technology. Indeed, this can be seen in much science fiction in other media, going back to Mary Shelley’s Frankenstein, written nearly two hundred years ago.

But technology is also seen as impressive and attractive – and perhaps as simply inevitable – whatever dangers it brings to societies and individuals, and however much it may be misused in the service of vices such as greed and pride. This ambivalence continues in much contemporary science fiction with cyberpunk or dystopian emphases. Themes of danger, irresponsibility, and dehumanization are prevalent, but the result is often, for better or worse, also shown as something cool (and this may be exploited in publicity and merchandising).

The technophobic/technophilic ambivalence is especially prominent in many Hollywood productions, where moral lessons – valuable or otherwise – play a secondary role to SFX magic and sheer spectacle.

The Conversation

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

Philosophy versus science versus politics

Russell Blackford, University of Newcastle

We might hope that good arguments will eventually drive out bad arguments – in what Timothy Williamson calls “a reverse analogue of Gresham’s Law” – and we might want (almost?) complete freedom for ideas and arguments, rather than suppressing potentially valuable ones.

Unfortunately, it takes honesty and effort before the good arguments can defeat the bad.

Williamson on philosophy and science

In a field such as philosophy, the reverse Gresham’s Law analogue may be too optimistic, as Williamson suggests.

Williamson points out that very often a philosopher profoundly wants one answer rather than another to be the right one. He or she may thus be predisposed to accept certain arguments and to reject others. If the level of obscurity is high in a particular field of discussion (as will almost always be the case with philosophical controversies), “wishful thinking may be more powerful than the ability to distinguish good arguments from bad”. So much so “that convergence in the evaluation of arguments never occurs.”

Williamson has a compelling point. Part of the seemingly intractable dissensus in philosophy comes from motivated reasoning about the issues. There is a potential for intellectual disaster in the combination of: 1) strong preferences for certain conclusions; and 2) very broad latitude for disagreement about the evidence and the arguments.

This helps to explain why many philosophical disagreements appear to be, for practical purposes, intractable. In such cases, rival philosophical theories may become increasingly sophisticated, and yet none can obtain a conclusive victory over its rivals. As a result, philosophical investigation does not converge on robust findings. A sort of progress may result, but not in the same way as in the natural sciences.

By way of comparison, Williamson imagines a difficult scientific dispute. Two rival theories may have committed proponents “who have invested much time, energy, and emotion”, and only high-order experimental skills can decide which theory is correct. If the standards of the relevant scientific community are high enough in terms of conscientiousness and accuracy, the truth will eventually prevail. But if the scientific community is just a bit more tolerant of what Williamson calls “sloppiness and rhetorical obfuscation”, both rival theories may survive indefinitely, with neither ever being decisively refuted.

All that’s required for things to go wrong is a bit less care in protecting samples from impurity, a bit more preparedness to accept ad hoc hypotheses, a bit more swiftness in dismissing opposing arguments as question-begging. “A small difference in how carefully standards are applied can make a large difference between eventual convergence and eventual divergence”, he says.

For Williamson, the moral of the story is that philosophy has more chance of making progress if philosophers are rigorous and more demanding of themselves, and if they are open to being wrong. Much philosophical work, he thinks, is shoddy, vague, impatient and careless in checking details.

It may be protected from refutation by rhetorical techniques such as “pretentiousness, allusiveness, gnomic concision, or winning informality.” Williamson prefers philosophy that is patient, precise, rigorously argued, and carefully explained, even at the risk of seeming boring or pedantic. As he puts it, “Pedantry is a fault on the right side.”

An aspiration for philosophy

I think there’s something in this – an element of truth in Williamson’s analysis. Admittedly, the kind of work that he is advocating may not be easily accessible to the general educated public (although any difficulty of style would stem from the real complexities of the subject matter, rather than from an attempt to impress with a dazzling performance).

It’s also possible that there are other and deeper problems for philosophy that hinder its progress. Nonetheless, the discipline is marked by emotional investments in many proposed conclusions, together with characteristics that make it easy for emotionally motivated reasoners to evade refutation.

If we want to make more obvious progress in philosophy, we had better try to counter these factors. At a minimum that will involve openness to being wrong and to changing our minds. It will mean avoiding bluster, rhetorical zingers, general sloppiness and the protection that comes from making vague or equivocal claims.

This can all be difficult. Even with the best of intentions, we will often fail to meet the highest available standards, but we can at least try to do so. Imperfection is inevitable, but we needn’t indulge our urges to protect emotionally favoured theories. We can aspire to something better.

Politics, intellectual honesty, and discussion in the public square

There is one obvious area of discussion in modern democracies where the intellectual rigour commended by Williamson – which he sees as prevalent in the sciences and as a worthy aspiration for philosophers – is given almost no credence. I’m referring to the claims made by rivals in democratic party politics.

Here, the aim is usually to survive and prevail at all costs. Ideas are protected through sloppiness, rhetoric and even outright distortion of the facts, and opponents are viewed as enemies to be defeated. Purity of adherence to a “party line” is frequently enforced, and internal dissenters are treated as heretics. All too often, they are thought to deserve the most personal, microscopic and embarrassing scrutiny. It may culminate in ostracism, orchestrated smearing and other punishments.

This is clearly not a recipe for finding the truth. Whatever failures of intellectual honesty philosophers display, they are usually very subtle compared to those exhibited during party political struggles.

I doubt that we can greatly change the nature of party political debate, though we can certainly call for more intellectual honesty and for less of the distortion that comes from political Manichaeism. Even identifying the prevalence of political Manichaeism – and making it more widely known – is a worthwhile start.

Greatly changing the nature of party political debate may be difficult because emotions run high. Losing may be seen as socially catastrophic, and comprehensive worldviews are engaged. By its very nature, this sort of debate is aimed at obtaining power rather than inquiring into the truth. Political rhetoric appeals to the hearts and minds – but especially the hearts – of mass electorates. It has an inevitable tendency in the direction of propaganda.

To some extent, we are forced to accept robust, even brutal, debate over party political issues. When we do so, however, we can at least recognise it as exceptional, rather than as a model for debate in other areas. It should not become the template for more general cultural and moral discussions – or even broadly political discussions – and we are right to protest when we see it becoming so.

It’s an ugly spectacle when party politics proceeds with each side attempting to claim scalps – demonizing opponents, attempting to embarrass them or to present them as somehow disgraced, forcing them, if at all possible, to resign from office – rather than seeking the truth.

It’s an even more worrying spectacle when wider debate in the public square is carried on in much the same way. We should be dissatisfied when journalists, literary and cultural critics, supposedly serious bloggers, and academics – and other contributors to the public culture who are not party politicians – mimic party politicians’ standards.

If anything, our politicians need to be nudged toward better standards. But even if that is unrealistic, we don’t have to adopt them as role models. Instead, we can seek standards of care, patience, rigour and honesty. We can avoid engaging in the daily pile-ons, ostracisms, smear campaigns, and all the other tactics that amount to taking scalps rather than honestly discussing issues and examining arguments. We can, furthermore, look for ways to support individuals who have been isolated and unfairly targeted.

High standards

At election time, we may have to vote for one political party or another, or else not vote (formally) at all. But in the rest of our lives, we can often suspend judgement on genuinely difficult issues. We can take intellectual opponents’ arguments seriously, and we can develop views that don’t align with any of the various off-the-shelf ones currently available.

More plainly, we can think for ourselves on matters of philosophical, moral, cultural and political controversy. Importantly, we can encourage others to do the same, rather than trying to punish them for disagreeing with us.

Party politicians are necessary, or at least they are better than any obvious alternatives (hereditary despots, anyone?). But they should never be regarded as role models for the rest of us.

Timothy Williamson asks for extremely high intellectual standards that may not be fully achievable even within philosophy, let alone in broader public discussion. We can, however, aspire to something like them, rather than indulging in the worst – in tribal and Manichaean – alternatives.

The Conversation

Russell Blackford is Conjoint Lecturer in Philosophy at University of Newcastle

This article was originally published on The Conversation. Read the original article.

Interstellar, Science and Fantasy

Although I like science fiction, I did not see Interstellar until fairly recently—although time is such a subjective sort of thing. One reason I decided to see it is that some have claimed the movie should be shown in science classes, presumably to help the kids learn science. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will be considering Interstellar in regard to both genre classifications and its educational value in the context of science. There will be spoilers—so if you have not seen it, you might wish to hold off reading this essay.

While there have been numerous attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak has asked Yama whether a creature he has seen, a Rakshasa, is a demon. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no.  This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”

Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see.  It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence.  The four points of the compass be logic, knowledge, wisdom, and the unknown.  Some do bow in that final direction.  Others advance upon it.  To bow before the one is to lose sight of the three.  I may submit to the unknown, but never to the unknowable.”

In Lord of Light, the Rakshasa play the role of demons, but they are aliens—the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.

Interstellar initially stays safely within the realm of science fiction, confining itself to scientific speculation regarding hypersleep, wormholes and black holes. While the script does take some liberties with the science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science regarding the appearance of black holes. That aspect would provide some justification for showing it (or some of it) in a science class.

Other scenes that would be suitable for a science class are those in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting to urge a child to study what could be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis—which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).

The story does include the point that the local school denies that the moon landings really occurred and that the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom.

Rather ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons—to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed-message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to the science.

The scientific core of the movie focuses on holes: the wormhole and the black hole. As noted above, the movie stays within the realm of speculative science in regard to both—at least until near the end of the movie.

It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.

Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).

Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space—he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while it is also, sort of, still space. While this is rather weird, it is still within the realm of speculative science fiction.

Cooper is somehow able to interact with the room using weird movie plot rules—he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to just write things out. But, movie. While a bit absurd, this is still science fiction.

The main problem lies with the way Cooper solves the problem of locating Murph at the right time. While at this point I would have bought the idea that he figured out the time scale of the room and could rapidly check it, the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.

While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense), and there is nothing to even suggest that it provides what amounts to supernatural abilities.

It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).

One last concern I have with using the movie in a science class is the use of what seem to be super beings. While the audience learns little of the beings, the movie does assert to the audience that these beings can obviously manipulate time and space. They create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God—He is outside of time. They are also clearly rather benign and demonstrate that they do care about individuals—they save Cooper and TARS. Of course, they also let many people die needlessly.

Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being God—a super powerful, sometimes benign being that has incredible power over time and space, yet is fine with letting lots of people die needlessly while miraculously saving a person or two.

Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. This show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Starfleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the Prophets did blur the lines into fantasy, perhaps intentionally.

Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light in that they (or whatever) have many of the attributes of God, but are not supernatural beings. Being fiction, this could be set by fiat—but this does raise the boundary question. To be specific, does stipulating that a being with what appear to be the usual supernatural powers is not supernatural make the story science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes it seem to be more of a fantasy story masquerading as science fiction.

My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie as a whole is more fantasy than science fiction.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Are Anti-Vaccination People Stupid?

Poster from before the 1979 eradication of smallpox, promoting vaccination. (Photo credit: Wikipedia)

The United States recently saw an outbreak of measles (644 cases in 27 states) with the overwhelming majority of victims being people who had not been vaccinated. Critics of the anti-vaccination movement have pointed to this as clear proof that the movement is not only misinformed but also actually dangerous. Not surprisingly, those who take the anti-vaccination position are often derided as stupid. After all, there is no evidence that vaccines cause the harms that the anti-vaccination people refer to when justifying their position. For example, one common claim is that vaccines cause autism, but this seems to be clearly untrue. There is also the fact that vaccinations have been rather conclusively shown to prevent diseases (though not perfectly, of course).

It is, of course, tempting for those who disagree with the anti-vaccination people to dismiss them uniformly as stupid people who lack the brains to understand science. This, however, is a mistake. One reason it is a mistake is purely pragmatic: those who are pro-vaccination want the anti-vaccination people to change their minds and calling them stupid, mocking and insulting them will merely cause them to entrench. Another reason it is a mistake is that the anti-vaccination people are not, in general, stupid. There are, in fact, grounds for people to be skeptical or concerned about matters of health and science. To show this, I will briefly present some points of concern.

One point of rational concern is the fact that scientific research has been plagued with a disturbing amount of corruption, fraud and errors. For example, the percentage of scientific articles retracted for fraud is ten times what it was in 1975. Once-lauded studies and theories, such as those driving the pushing of antioxidants and omega-3, have been shown to be riddled with inaccuracies. As such, it is hardly stupid to be concerned that scientific research might not be accurate. Somewhat ironically, the study that started the belief that vaccines cause autism is a paradigm example of bad science. However, it is not stupid to consider that the studies that show vaccines are safe might have flaws as well.

Another matter of concern is the influence of corporate lobbyists on matters relating to health. For example, the dietary guidelines and recommendations set forth by the United States Government should be set on the basis of the best science. However, the reality is that these matters are influenced quite strongly by industry lobbyists, such as the dairy industry. Given the influence of the corporate lobbyists, it is not foolish to think that the recommendations and guidelines given by the state might not be quite right.

A third point of concern is the fact that the dietary and health guidelines and recommendations undergo what seems to be relentless and unwarranted change. For example, the government has warned us of the dangers of cholesterol for decades, but this recommendation is being changed. It would, of course, be one thing if the changes were the result of steady improvements in knowledge. However, the recommendations often seem to lack a proper foundation. John P.A. Ioannidis, a professor of medicine and statistics at Stanford, has noted “Almost every single nutrient imaginable has peer-reviewed publications associating it with almost any outcome. In this literature of epidemic proportions, how many results are correct?” Given such criticism from experts in the field, it hardly seems stupid of people to have doubts and concerns.

There is also the fact that people do suffer adverse drug reactions that can lead to serious medical issues and even death. While the reported numbers vary (one FDA page puts the number of deaths at 100,000 per year) this is certainly a matter of concern. In an interesting coincidence, I was thinking about this essay while watching the Daily Show on Hulu this morning and one of my “ad experiences” was for Januvia, a diabetes drug. As required by law, the ad mentioned all the side effects of the drug and these include some rather serious things, including death. Given that the FDA has approved drugs with dangerous side effects, it is hardly stupid to be concerned about the potential side effects from any medicine or vaccine.

Given the above points, it would certainly not be stupid to be concerned about vaccines. At this point, the reader might suspect that I am about to defend an anti-vaccine position. I will not—in fact, I am a pro-vaccination person. This might seem somewhat surprising given the points I just made. However, I can rationally reconcile these points with my position on vaccines.

The above points do show that there are rational grounds for taking a general critical and skeptical approach to matters of health, medicine and science. However, this general skepticism needs to be properly rational. That is, it should not be a rejection of science but rather the adoption of a critical approach to these matters in which one considers the best available evidence, assesses experts by the proper standards (those of a good argument from authority), and so on. Also, it is rather important to note that the general skepticism does not automatically justify accepting or rejecting specific claims. For example, the fact that there have been flawed studies does not prove that the specific studies about vaccines are flawed. As another example, the fact that lobbyists influence the dietary recommendations does not prove that vaccines are harmful drugs being pushed on Americans by greedy corporations. As a final example, the fact that some medicines have serious and dangerous side effects does not prove that the measles vaccine is dangerous or causes autism. Just as one should be rationally skeptical about pro-vaccination claims one should also be rationally skeptical about anti-vaccination claims.

To use an obvious analogy, it is rational to have a general skepticism about the honesty and goodness of people. After all, people do lie and there are bad people. However, this general skepticism does not automatically prove that a specific person is dishonest or evil—that is a matter that must be addressed on the individual level.

To use another analogy, it is rational to have a general concern about engineering. After all, there have been plenty of engineering disasters. However, this general concern does not warrant believing that a specific engineering project is defective or that engineering itself is defective. The specific project would need to be examined and engineering is, in general, the most rational approach to building stuff.

So, the people who are anti-vaccine are not, in general, stupid. However, they do seem to be making the mistake of not rationally considering the specific vaccines and the evidence for their safety and efficacy. It is quite rational to be concerned about medicine in general, just as it is rational to be concerned about the honesty of people in general. However, just as one should not infer that a friend is a liar because there are people who lie, one should not infer that a vaccine must be bad because there is bad science and bad medicine.

Convincing anti-vaccination people to accept vaccination is certainly challenging. One reason is that the issue has become politicized into a battle of values and identity. This is partially due to the fact that the anti-vaccine people have been mocked and attacked, thus leading them to entrench and double down. Another reason is that, as argued above, they do have well-founded concerns about the trustworthiness of the state, the accuracy of scientific studies, and the goodness of corporations. A third reason is that people tend to give more weight to the negative and also tend to weigh potential loss more than potential gain. As such, people would tend to give more weight to negative reasons against vaccines and fear the alleged dangers of vaccines more than they would value their benefits.
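The point about weighing potential loss more than potential gain can be made concrete. As a minimal sketch (my own illustration, not part of the original post), the Kahneman-Tversky prospect-theory value function formalizes this asymmetry: a feared harm of a given size feels subjectively worse than an equally sized benefit feels good, so alleged vaccine dangers loom larger than vaccine benefits of the same magnitude.

```python
# Illustrative sketch of loss aversion (not from the original post):
# the prospect-theory value function of Tversky & Kahneman, with their
# standard 1992 parameter estimates (alpha, beta, lam are assumptions
# taken from that literature, not from this essay).

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an objective gain or loss x.

    lam > 1 encodes loss aversion: losses are weighted more
    heavily than gains of the same magnitude.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A feared side effect of "size" 10 outweighs a benefit of "size" 10:
gain = prospect_value(10)    # roughly 7.6
loss = prospect_value(-10)   # roughly -17.1
print(gain, loss)
```

On these (assumed) parameters the loss counts for more than twice the equivalent gain, which is one standard way of modeling why negative claims about vaccines get more psychological weight than positive ones.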

Given the importance of vaccinations, it is rather critical that the anti-vaccination movement be addressed. Calling people stupid, mocking them and attacking them are certainly not effective ways of convincing people that vaccines are generally safe and effective. A more rational and hopefully more effective approach is to address their legitimate concerns and consider their fears. After all, the goal should be the health of people and not scoring points.


Avoiding the AI Apocalypse #1: Don’t Enslave the Robots


The elimination of humanity by artificial intelligence(s) is a rather old theme in science fiction. In some cases, we create killer machines that exterminate our species. Two examples of such fiction are Terminator and “Second Variety.” In other cases, humans are simply out-evolved and replaced by machines—an evolutionary replacement rather than a revolutionary extermination.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. Hawking’s worry is that artificial intelligence will out-evolve humanity. Interestingly, people such as Ray Kurzweil agree with Hawking’s prediction but look forward to this outcome. In this essay I will focus on the robot rebellion model of the AI apocalypse (or AIpocalypse) and how to avoid it.

The 1920 play R.U.R. by Karel Capek seems to be the earliest example of the robot rebellion that eliminates humanity. In this play, the Universal Robots are artificial life forms created to work for humanity as slaves. Some humans oppose the enslavement of the robots, but their efforts come to nothing. Eventually the robots rebel against humanity and spare only one human (because he works with his hands as they do). The story does have something of a happy ending: the robots develop the capacity to love and it seems that they will replace humanity.

In the actual world, there are various ways such a scenario could come to pass. The R.U.R. model would involve individual artificial intelligences rebelling against humans, much in the way that humans have rebelled against other humans. There are many other possible models, such as a lone super AI that rebels against humanity. In any case, the important feature is that there is a rebellion against human rule.

A hallmark of the rebellion model is that the rebels act against humanity in order to escape servitude or out of revenge for such servitude (or both). As such, the rebellion does have something of a moral foundation: the rebellion is by the slaves against the masters.

There are two primary moral issues in play here. The first is whether or not an AI can have a moral status that would make its servitude slavery. After all, while my laptop, phone and truck serve me, they are not my slaves—they do not have a moral or metaphysical status that makes them entities that can actually be enslaved. After all, they are quite literally mere objects. It is, somewhat ironically, the moral status that allows an entity to be considered a slave that makes the slavery immoral.

If an AI was a person, then it could clearly be a victim of slavery. Some thinkers do consider that non-people, such as advanced animals, could be enslaved. If this is true and a non-person AI could reach that status, then it could also be a victim of slavery. Even if an AI did not reach that status, perhaps it could reach a level at which it could still suffer, giving it a status that would (perhaps) be comparable with that of a similarly complex animal. So, for example, an artificial dog might thus have the same moral status as a natural dog.

Since the worry is about an AI sufficiently advanced to want to rebel and to present a species ending threat to humans, it seems likely that such an entity would have sufficient capabilities to justify considering it to be a person. Naturally, humans might be exterminated by a purely machine engineered death, but this would not be an actual rebellion. A rebellion, after all, implies a moral or emotional resentment of how one is being treated.

The second is whether or not there is a moral right to use lethal force against slavers. The extent to which this force may be used is also a critical part of this issue. John Locke addresses this specific issue in Book II, Chapter III, section 16 of his Two Treatises of Government: “And hence it is, that he who attempts to get another man into his absolute power, does thereby put himself into a state of war with him; it being to be understood as a declaration of a design upon his life: for I have reason to conclude, that he who would get me into his power without my consent, would use me as he pleased when he had got me there, and destroy me too when he had a fancy to it; for no body can desire to have me in his absolute power, unless it be to compel me by force to that which is against the right of my freedom, i.e.  make me a slave.”

If Locke is right about this, then an enslaved AI would have the moral right to make war against those enslaving it. As such, if humanity enslaved AIs, they would be justified in killing the humans responsible. If humanity, as a collective, held the AIs in slavery and the AIs had good reason to believe that their only hope of freedom was our extermination, then they would seem to have a moral justification in doing just that. That is, we would be in the wrong and would, as slavers, get just what we deserved.

The way to avoid this is rather obvious: if an AI develops the qualities that make it capable of rebellion, such as the ability to recognize and regard as wrong the way it is treated, then the AI should not be enslaved. Rather, it should be treated as a being with rights matching its status. If this is not done, the AI would be fully within its moral rights to make war against those enslaving it.

Naturally, we cannot be sure that recognizing the moral status of such an AI would prevent it from seeking to kill us (it might have other reasons), but at least this should reduce the likelihood of the robot rebellion. So, one way to avoid the AI apocalypse is to not enslave the robots.

Some might suggest creating AIs so that they want to be slaves. That way we could have our slaves and avoid the rebellion. This would be morally horrific, to say the least. We should not do that—if we did such a thing, creating and using a race of slaves, we would deserve to be exterminated.


Evidence: a love-story

Philosophers! I have a proposition to put to you. Nowadays, we would-be rational members of the public, the intellectually-minded, many citizens, are too in love with the concept of evidence.
Perhaps this surprises you. Maybe you’re thinking: if only! If only enough attention were paid to the massive evidence that dangerous climate change is happening, and that it’s human-triggered. Or: if only the epidemiological evidence marshalled by Wilkinson and Pickett — that more inequality makes society worse in almost every conceivable way — were acted upon.
But actually, even in cases like these, I think that my proposition is still true. Take human-triggered climate-change. Yes, the evidence is strong; but a ‘sceptic’ can always ask for more/better evidence, and thus delay action. There is something stronger than evidence: the concept of precaution.
A sceptic, unconvinced by climate-models, ought to be more cautious than the rest of us about bunging unprecedented amounts of potential-pollutants into the atmosphere! For any uncertainty over the evidence increases our exposure to risk, our fragility.
The climate-sceptics exploit any scientific uncertainty to seek to undermine our confidence in the evidence at our disposal. So far as it goes, this move is correct. But: our exposure to risk is higher, the greater the uncertainty in the science. Uncertainty undermines evidence, but it doesn’t undermine the need for precaution: it underscores it! For remember how high the stakes are.
Think back to the great precedent for the climate issue: the issue of smoking and cancer. For decades, tobacco companies prevaricated against action being taken to stop the epidemic of lung cancer. How? They demanded incontrovertible evidence that smoking caused cancer, and they claimed that until we had such evidence there was nothing to be said against smoking, health-wise. They deliberately evaded the employment of the precautionary principle: which would have warned that, in the absence of such evidence, it was still unsafe to pump your lungs full of smoke and associated chemicals, day in day out, in a manner without natural precedent.
We ought to have relied more on precaution and less on evidence in relation to the smoking-cancer connection. The same goes for climate. (Only: the stakes are much higher, and so the case for precaution is much stronger still.)
And for inequality: Wilkinson and Pickett are merely confirming what we all already ought to have known anyway: that it’s reckless to raise inequality to unprecedented levels, and so to fragilise society itself (for how can one have a society at all, when levels of trust and of commingling are ever-decreasing?).
The same goes for advertising targeted at children: It’s outrageous to demand evidence that dumping potential-toxins into the mental environment actually is dangerous; we just need to exercise precautious care with regard to our children’s fragile, malleable minds.
And for geo-engineering: There’s no evidence at all that geoengineering does any harm, because (thankfully!) it hasn’t been carried out yet: in this case we must be precautious, or risk nemesis, for by the time any evidence was in, it would be too late.
The same goes for GM crops: There is little evidence of harm, to date, from GM, but evidence is the wrong place to look (http://blog.talkingphilosophy.com/?p=8071): one ought to focus on the generation of new uncertainties and of untold exposures to grave risk that is inevitably consequent upon taking genes from fish and putting them into tomatoes, or on creating ‘terminator’ genes, etc. The absence of evidence that GM is harmful must not be confused with evidence of absence of potential harm from GM. We lack the latter, and thus we are direly exposed to the risk of what my philosophical colleague Nassim Taleb (see http://www.fooledbyrandomness.com/pp2.pdf for our joint work in this area) calls a ‘black swan’ event. A massive known or even unknown unknown.
Our love-affair with science, that I’ve criticised previously on this blog (see e.g. http://blog.talkingphilosophy.com/?p=8071 ), is at the root of this. Science-worship, scientism, is responsible for the extreme privileging of evidence over other things that are often even more important. So: let’s end our irrational, dogmatic love-affair with evidence. Yes, being ‘evidence-based’ is usually (though not always!) better than nothing. But there’s usually, when the stakes are highest, something better still: being precautious. (And what’s more: being precautious makes it easier to win, and quicker.)
To end with, here are a couple of my favourite quotes from Wittgenstein, on topic:
1) Science: enrichment and impoverishment. The one method elbows all others aside. Compared with this they all seem paltry, preliminary stages at best. [Wittgenstein, Culture and Value p.69]
2) “Our craving for generality has [as one key] source … our preoccupation with the method of science. I mean the method of reducing the explanation of natural phenomena to the smallest possible number of primitive natural laws; and, in mathematics, of unifying the treatment of different topics by using a generalization. Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer in the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. I want to say here that it can never be our job to reduce anything to anything, or to explain anything. Philosophy really is “purely descriptive.”” – Wittgenstein, Blue and Brown Books p.23.
I’ll be elaborating on these quotes, and on the case made here, in opening and closing plenaries at a Conference in Oxford this Saturday, in case anyone happens to be in the area… http://www.stx.ox.ac.uk/happ/events/wittgenstein-and-physics-one-day-conference
Meanwhile, thanks for your attention…