Tag Archives: Science

Interstellar, Science and Fantasy

Although I like science fiction, I did not see Interstellar until fairly recently (though time is such a subjective sort of thing). One reason I decided to see it is that some have claimed the movie should be shown in science classes, presumably to help the kids learn science. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will consider Interstellar in regards to both genre classification and its educational value in the context of science. There will be spoilers, so if you have not seen it, you might wish to hold off reading this essay.

While there have been numerous attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak has inquired of Yama about whether a creature, a Rakshasa, he has seen is a demon or not. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no.  This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”

Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see.  It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence.  The four points of the compass be logic, knowledge, wisdom, and the unknown.  Some do bow in that final direction.  Others advance upon it.  To bow before the one is to lose sight of the three.  I may submit to the unknown, but never to the unknowable”

In Lord of Light, the Rakshasa play the role of demons, but they are aliens—the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.

Interstellar initially stays safely within the realm of science fiction, confining itself to scientific speculation regarding hypersleep, wormholes and black holes. While the script does take some liberties with the science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science regarding the appearance of black holes. That aspect would provide some justification for showing it (or some of it) in a science class.

Another part of the movie that would be suitable for a science class is the set of scenes in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting to urge one's child to study what might be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis, which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).

The story does include the point that the local school denies that the moon landings really occurred and that the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom.

Rather ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons: to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to the science.

The science core of the movie focuses on holes: the wormhole and the black hole. As noted above, the movie does stick within the realm of speculative science in regards to both, at least until near the end of the movie.

It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.

Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).

Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space: he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while it is also sort of still space. While this is rather weird, it is still within the realm of speculative science fiction.

Cooper is somehow able to interact with the room using weird movie plot rules: he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to write things out. But, movie. While a bit absurd, this is still science fiction.

The main problem lies with the way Cooper solves the problem of locating Murph at the right time. While at this point I would have bought the idea that he figured out the time scale of the room and could rapidly check it, the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.

While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense), and there is nothing to even suggest that it provides what amounts to supernatural abilities.

It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).

One last concern I have with using the movie in a science class is its use of what seem to be super beings. While the audience learns little of these beings, the movie does assert that they can manipulate time and space: they create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience that the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God: He is outside of time. The beings are also clearly rather benign and demonstrate that they care about individuals, since they save Cooper and TARS. Of course, they also let many people die needlessly.

Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being God: a super powerful, sometimes benign being that has incredible power over time and space, yet is fine with letting lots of people die needlessly while miraculously saving a person or two.

Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. That show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Starfleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the Prophets did blur the line into fantasy, perhaps intentionally.

Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light in that they (or whatever) have many of the attributes of God, but are not supernatural beings. Being fiction, this could be set by fiat, but it does raise the boundary question. To be specific, does declaring that something with what appear to be the usual supernatural powers is nonetheless natural make the work science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes the movie seem to be more of a fantasy story masquerading as science fiction.

My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie as a whole is more fantasy than science fiction.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Critical Thinking, Ethics & Science Journalism

As part of my critical thinking class, I cover the usual topics of credibility and experiments/studies. Since people often find critical thinking a dull subject, I regularly look for real-world examples that might be marginally interesting to students. As such, I was intrigued by John Bohannon’s detailed account of how he “fooled millions into thinking chocolate helps weight loss.”

Bohannon’s con provides an excellent cautionary tale for critical thinkers. First, he lays out in detail how easy it is to rig an experiment to get (apparently) significant results. As I point out to my students, a small experiment or study can generate results that seem significant, but really are not. This is why it is important to have an adequate sample size—as a starter. What is also needed is proper control, proper selection of the groups, and so on.
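The point about small samples and many measured outcomes can be made concrete with a quick simulation. The sketch below uses hypothetical parameters (18 outcomes, 8 subjects per group), loosely inspired by the chocolate study but not Bohannon's actual design or code: it runs simulated two-group experiments in which there is no real effect at all, yet checks many outcome variables per experiment, and counts how often at least one of them looks "significant" at the 5% level.

```python
import math
import random

def fraction_with_false_positive(n_trials=2000, n_outcomes=18,
                                 group_size=8, alpha_z=1.96, seed=1):
    """Simulate null experiments (no real effect) that each measure
    many outcomes; return the fraction of experiments in which at
    least one outcome crosses the p < 0.05 threshold by chance."""
    rng = random.Random(seed)
    # Standard error of the difference of two means of N(0,1) samples.
    se = math.sqrt(2.0 / group_size)
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_outcomes):
            # Two groups drawn from the SAME distribution: any observed
            # difference in means is pure noise.
            mean_a = sum(rng.gauss(0, 1) for _ in range(group_size)) / group_size
            mean_b = sum(rng.gauss(0, 1) for _ in range(group_size)) / group_size
            if abs(mean_a - mean_b) / se > alpha_z:  # "significant" at p < 0.05
                hits += 1
                break  # one spurious result is enough to write the headline
        else:
            continue
    return hits / n_trials

print(fraction_with_false_positive())
```

With 18 independent outcomes each tested at the 5% level, the chance of at least one false positive per experiment is 1 − 0.95^18, roughly 60%, and the simulation returns a fraction close to that. This is exactly why a single significant result from a small study that measured many variables is weak evidence on its own.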

Second, he provides a clear example of a disgraceful stain on academic publishing, namely “pay to publish” journals that do not engage in legitimate peer review. While some bad science does slip through peer review, these journals apparently publish almost anything—provided that the fee is paid. Since the journals have reputable sounding names and most people do not know which journals are credible and which are not, it is rather easy to generate a credible seeming journal publication. This is why I cover the importance of checking sources in my class.

Third, he details how various news outlets published or posted the story without making even perfunctory efforts to check its credibility. Not surprisingly, I also cover the media in my class both from the standpoint of being a journalist and being a consumer of news. I stress the importance of confirming credibility before accepting claims—especially when doing so is one’s job.

While Bohannon’s con does provide clear evidence of problems in regards to corrupt journals, uncritical reporting and consumer credulity, the situation does raise some points worth considering. One is that while he might have “fooled millions” of people, he seems to have fooled relatively few journalists (13 out of about 5,000 reporters who subscribe to the Newswise feed Bohannon used), and those who ran the story were more the likes of the Huffington Post and Cosmopolitan than what might be regarded as serious health news sources. While it is not known why the other reporters did not run the story, it is worth considering that some of them did look at it critically and rejected it. In any case, the fact that a small number of reporters fell for a dubious story is hardly shocking. It is, in fact, just what would be expected given the long history of journalism.

Another point of concern is the ethics of engaging in such a con. It is possible to argue that Bohannon acted ethically. One way to do this is to note that using deceit to expose a problem can be justified on utilitarian grounds. For example, it seems morally acceptable for a journalist or police officer to use deceit and go undercover to expose criminal activity. As such, Bohannon could contend that his con was effectively an undercover operation—he and his fellows pretended to be the bad guys to expose a problem and thus his deceit was morally justified by the fact that it exposed problems.

One obvious objection to this is that Bohannon’s deceit did not just expose corrupt journals and incautious reporters. It also misinformed the audience who read or saw the stories. To be fair, the harm would certainly be fairly minimal—at worst, people who believed the story would consume dark chocolate and this is not exactly a health hazard. However, intentionally spreading such misinformation seems morally problematic—especially since story retractions or corrections tend to get far less attention than the original story.

One way to counter this objection is to draw an analogy to the exposure of flaws by hackers. These hackers reveal vulnerabilities in software with the stated intent of forcing companies to address the vulnerabilities. Exposing such vulnerabilities can do some harm by informing the bad guys, but the usual argument is that this is outweighed by the good done when the vulnerability is fixed.

While this does have some appeal, there is the concern that the harm done might not outweigh the good done. In Bohannon’s case it could be argued that he has done more harm than good. After all, it is already well-established that the “pay to publish” journals are corrupt, that there are incautious journalists and credulous consumers. As such, Bohannon has not exposed anything new—he has merely added more misinformation to the pile.

It could be countered that although these problems are well known, it does help to continue to bring them to the attention of the public. Going back to the analogy of software vulnerabilities, it could be argued that if a vulnerability is exposed, but nothing is done to patch it, then the problem should be brought up until it is fixed, “for it is the doom of men that they forget.” Bohannon has certainly brought these problems into the spotlight and this might do more good than harm. If so, then this con would be morally acceptable—at least on utilitarian grounds.





Pausing in her grazing, a mother mammoth casts a wary eye for signs of danger to herself and her offspring. Hidden from her view, a saber-toothed cat assesses his chances of getting a meal…or getting stomped. The cat is startled by movement behind it and whirls about to confront a vehicle full of people. Digital photos are snapped, then uploaded to Facebook. “Damn tourists”, thinks the cat, as it saunters away.

While this scene is not yet a reality, there are people who hope to make it so through de-extinction. De-extinction is the restoration of a species that has been lost to extinction. The most famous fictional example is Jurassic Park: dinosaurs are restored and made the central focus of an amusement park. There have been real-life attempts at restoring lost species, but these have focused on species that went extinct far more recently than the dinosaurs.

There are various ways in which a species can be restored. The best known (thanks to the movies) is genetic restoration: the genes of the species are recovered and used to recreate the species. For example, recovered mastodon DNA could be implanted into an “emptied” elephant egg and the egg could then be implanted into a female elephant. If the process succeeded, the surrogate mother would give birth to an actual mastodon.

A somewhat less known method is “trait” or “appearance” restoration. In this method, an extinct species is recreated by selectively modifying an existing species until it looks like the extinct species. For example, an extinct species of pigeon could be “restored” in this manner. One obvious question about this method is whether such a restoration should be considered an actual de-extinction. To use the obvious analogy, if after my death someone is modified to look like me, then I have not been restored to life. Likewise, creating a species that looks (and acts) like the extinct species does not seem to really restore the species. Rather, a clever imposter has been created.

In addition to the practical concerns of the science and technology of de-extinction, there are also moral concerns. Not surprisingly, many of these concerns involve the potential consequences of de-extinction.

One matter of concern is that the de-extinction of a species could have negative consequences for other species or the environment. A restored species could become an invasive and harmful species (directly or indirectly), which would be rather bad, as has been shown by existing invasive species that humans have transported into new environments. In the case of de-extinction, humans would be re-creating the species rather than transporting it, but the effect could be quite similar.

It can be replied that the impact of a species could be sorted out ahead of time, especially if the species went extinct fairly recently. The counter to this reply is to point out that people have made rather serious mistakes when importing species and that it is not unreasonable to believe that people could make comparable mistakes.

Another matter of concern is that a species could be restored despite there being no viable habitat for it. This sort of irresponsible de-extinction might occur for a variety of reasons, perhaps to provide a novelty attraction for a zoo or park. This sort of treatment of an animal would certainly seem to be wrong because of the exploitation of the species. The reply to this is the same as that given when species close to extinction are kept in zoos or parks: such an existence is better than no existence. This does have a certain appeal, but it could be contended that restoring an animal to keep it in a zoo is relevantly different from endeavoring to preserve an existing species. It could also be contended that the zoo preservation of endangered species is itself wrong, hence the restoration of an extinct species to serve as a zoo exhibit would also be wrong.

One common argument against de-extinction is that it would be expensive and would thus take money away from conservation efforts that would yield more results for the money. While I cannot predict the exact cost of restoring a mastodon, it seems safe to predict that it would be extremely expensive. This money could, one might argue, be better spent protecting elephants.

While such cost arguments have considerable appeal, they often suffer from an obvious defect. This defect is that the argument fails to take into account the fact that there is not just one pool of money that is allocated to this matter. That is, money spent on restoring a species need not come from the money that would otherwise be spent on preserving existing species.

While it could be argued that money spent on de-extinction would be better spent elsewhere, it could very well be the case that the money spent on de-extinction would not, in fact, be spent on anything better. To use an obvious example, a wealthy celebrity might not care much about the plight of the snail darter, but he might be willing to spend millions of dollars to get a saber-toothed cat. To use another example, an investor might not be interested in spending money to save elephants, but she might be very interested in funding a Mammoth Park featuring restored mammoths and other charismatic but extinct species that people would pay to see. Interestingly, this sort of funding could itself raise moral concerns. That is, bringing back the mammoths so some investors can make a fortune on Mammoth Park might strike some as morally dubious.

Laying aside the moral concerns about why we should not engage in de-extinction, there is also the matter of why we should (morally) do this. In the case of natural extinctions, it would seem that we would not have a moral reason to restore a species. After all, humans were not responsible for its demise. Naturally, we might have pragmatic (to create Mammoth Park) or scientific reasons to restore such a species.

In the case of human caused extinctions, a case can be made that we should undo the (alleged) wrong that we did. This line of reasoning has the most appeal. After all, if we were responsible for the death of a species and we could restore this species, then it would seem that we should do so. To use the obvious analogy, if I kill someone (by accident or by intent) and then I get the means to restore the person, then I should do so (unless, of course, killing the person was the right thing to do).

In any case, I am waiting for my dire wolf-husky crossbreed.


Gaukroger, religion, and the rise of science

I have been reading two huge and detailed books on the rise of modern science: Stephen Gaukroger’s The Emergence of a Scientific Culture: Science and the Shaping of Modernity, 1210-1685 (Oxford and New York: Oxford University Press, 2006); and Gaukroger’s The Collapse of Mechanism and the Rise of Sensibility: Science and the Shaping of Modernity, 1680-1760 (Oxford and New York: Oxford University Press, 2010). These are the first two volumes (the only ones so far) in an ongoing series by Gaukroger examining the advance of science.

I started on this exercise in response to an anonymous reviewer for Wiley-Blackwell, which will be publishing my co-authored book with Udo Schuklenk, 50 Great Myths About Atheism. The idea put to us by the reviewer was that Gaukroger has demonstrated the vital role of Christian theology in assisting the consolidation of science in early modern Europe. I must say that I’m not totally convinced.

As its title implies, the first volume covers European intellectual history from the rise of neo-Aristotelian natural philosophy in the 13th century, through developments in the 16th and early 17th centuries, involving Copernicus, Galileo, Hobbes, Gassendi, Kepler, Descartes, and others, to the spectacular flowering of science in the work of Sir Isaac Newton, Robert Hooke, and many others in the late 17th century.

Obviously, all these men appeared in cultures that gave them the intellectual and other resources for their work, but when you trace through the detail of what motivated them, how they influenced each other, and so on, not much of that has to do with Christianity. What most comes across is their fascination with experiments, thought experiments, and each other’s ideas, and in many cases their joy-cum-obsession with the new tools that had become available to them in the form of scientific instruments, precision crafted experimental apparatus, and increasingly powerful kinds of mathematics.

Gaukroger sees his central question as being why a large-scale, successfully legitimating consolidation of science took place in Europe in the 17th century (and thereafter), when science has tended to be fragmented and stop-start, with long periods of stagnation, wherever it has appeared in a promising form in other times, places, and cultures. He answers that the natural philosophy of the Scientific Revolution was attractive to many thinkers in the 17th and 18th centuries precisely because it appeared to have promise for the renewal of natural theology.

There may be something in this, although before I go on let’s pause to note it is very different from saying that there was something about Christianity that made it inherently pro-science in the first place. Gaukroger does not appear to maintain any such thesis (and nor, as far as I know, does the anonymous reviewer that I mentioned).

Indeed, Gaukroger notes that there was a considerable tradition within ancient and medieval Christianity of opposition to natural philosophy (and hence anything resembling science), seeing it as distracting or even idolatrous. Nothing in his books seems to give late medieval scholasticism much credit for the rise of science (it appears that whatever science it produced in the 13th and 14th centuries was not fruitful, and stagnated much like in other cultures that showed promising beginnings in scientific thought, such as China and medieval Islam). Indeed, even Aristotle’s form of natural philosophy was initially resisted by the 13th-century Church, although the synthesis produced by Aquinas was later given the Church’s endorsement.

Renaissance natural theology was largely an attempt to reconcile Aristotelianism with theology, which may well have been intellectually fruitful in some ways, but the Church was harsh to anyone who drew conclusions that strayed beyond orthodoxy. If anything, Christianity seems to have acted more as a hindrance than otherwise to free inquiry into the phenomena of the natural world (though, of course, even resistance can sometimes be inspiring).

Late in The Emergence of a Scientific Culture, Gaukroger discusses the (largely British) phenomenon of physico-theology: the attempts by some theologians, scientists (as we’d now call them), and philosophers to reconcile theology with what was emerging from science, or even to use scientific findings to support or revitalise theology. He writes interestingly of thinkers, such as Ralph Cudworth, who embraced a version of the atomist view of the natural world that had become popular within science, while attempting at the same time to modify it and to include it in their metaphysical systems. Gaukroger then deals at some length with others who attempted to reconcile scientific theories of the formation of the Earth with the Genesis account of creation and the biblical chronology of history. He puts an impressive enough case that in the 1680s and 1690s, especially in the UK, there was a widespread view that natural philosophy could be used as a source of evidence for God.

But none of this shows that the successful consolidation of science in the 17th and 18th centuries had much to do with Christianity. On the face of it, I’d have thought that the successful consolidation of science at this point in history owed more simply to its unprecedented theoretical successes, the causes of which were contingent and complicated – perhaps having to do with some of the personalities involved, perhaps with non-religious aspects of European culture, perhaps with breakthroughs in mathematics and scientific instrumentation. And perhaps with other things. I don’t see any densely argued case for giving much credit to religion.

About the most that could be said with any confidence is that, back in, say, 1600, orthodox theology might have looked like a very formidable barrier for science to overcome. After all, as Gaukroger says in The Collapse of Mechanism and the Rise of Sensibility, “Christianity … had traditionally laid claim to universal competence in all matters of understanding the world and our place in it, most notably in its Augustinean version”, but as he immediately adds this claim was decisively weakened during the seventeenth century. Despite the terrible execution of Giordano Bruno in 1600, for a mix of sins in the eyes of the Church, and the persecution of Galileo not long after, Christianity did not do all that much to block the rise of science in the second half of the century.

Given Christianity’s longstanding claims to universal epistemic competence, it is no wonder that it came into conflict with Aristotelian natural philosophy and later with early modern science, personified by Galileo. These stood to draw their own conclusions and to challenge theology’s authority.

Thus, Gaukroger is doubtless correct when he makes much of the issue of the relationship between the epistemic authority of Christianity and that of natural philosophy (or science). He says, I think justly, that the issue of the relationship between “the kind of understanding of the world that natural philosophy provides, and that provided by Christian revelation and natural theology” was a pressing one in Christian Europe from the beginning of the 13th century, when Aristotelian texts and doctrines were introduced into the intellectual culture.

Given the intellectual hegemony of Christianity, it can be argued that the ability of science to consolidate itself depended on its relationship with Christian thought. On this hinged the ability of science to establish itself in the late 17th and early 18th centuries “as a permanent and integral feature of Western intellectual life” (The Collapse of Mechanism and the Rise of Sensibility).

During this period, as Gaukroger reminds us, it was widely understood as a requirement for natural philosophy that its theories be compatible with shared assumptions in Europe about morality, our place in the world, and religious thinking in general. In the upshot, science conformed – to some extent, it avoided heresy by carefully defining its field of inquiry as the natural world (while drawing a sharp boundary with the supernatural world), and to some extent it produced theories that ultimately appealed to the actions of God, as we find in the work of Newton.

All this, however, is not so much Christian theology nurturing science as simply not proving to be such a formidable barrier as first appeared. To some extent, it was a matter of science accommodating itself to Christianity. To some extent, it may, indeed, have been certain theologians welcoming the findings of science as a resource for theology. But to some extent it may simply be that Christianity had lost much of its intellectual hegemony for totally different reasons – partly, perhaps, because of the disastrous Thirty Years’ War, and partly because of extensive contact with other cultures in the New World and the Far East, which also tended to undermine absolutism and certainty.

Despite Gaukroger’s extensive scholarship, there’s still a story to research and tell here – a story about how Christianity increasingly lost its intellectual authority, and why it was, perhaps, increasingly less in a position to hinder the rise of science and competition from other epistemic rivals.

I’m glad I had my attention drawn to these books. I began reading them to see what they have to say about the interaction between early science and Christian theology. But, although that is a recurring theme, it does not dominate the discussion by any means, and much of the fascination is simply in getting a consolidated and detailed account of how science developed, hypothesis by hypothesis, contributor by contributor, step by step, in its early centuries, and how it interacted with much else, such as the broader literary and intellectual culture of Europe. Taken together, The Emergence of a Scientific Culture and The Collapse of Mechanism and the Rise of Sensibility form an extraordinarily scholarly and exhaustive account of what was going on during a crucial period in intellectual history, as high medieval culture gave way to early modernity, and then the Enlightenment era.

[This post is based on a series of posts over on my personal blog.]

[Pssst: Check out my books at Amazon. Not least Freedom of Religion and the Secular State.]


My new book, WITTGENSTEIN AMONG THE SCIENCES, is out today. I am feeling pretty excited; it looks GREAT. Have a look, here: http://ashgate.com/default.aspx?page=637&calcTitle=1&title_id=11016&edition_id=14506
What is the book about? I would describe it as a broadly post-Schutzian attempt to understand the nature of science, working through and from the work of Wittgensteinians such as Kuhn and Winch. One aspect of it that may be of especial/broader interest is that I seek to inform policy debates around science: e.g. to argue that science policy ought to be relatively free of government direction, unlike technology policy, which should be subject to tight social constraints. In Part 2 of the book, I seek to employ Wittgensteinian thinking to help in the practical business of understanding the nature of psychopathology. Including the psychopathology of unrestrained economism… That is: I argue, for instance, that while Friedman’s celebrated monetary treatise on the U.S. economy and the Great Depression put the latter down in significant part to a failure of monetary policy to make enough money available, one key factor behind the 2007-now economic and financial crisis is a dubious thingifying attitude to money that was _encouraged_ by Friedmanian monetarism and that can be seen, implicitly, writ large in Friedman’s famous and hugely influential article, “The Methodology of Positive Economics”.
Enough tasters. See what you think. Let me know here?
(There is an ebook version available, btw.)

Thanks to everyone who helped me with the book, especially my editor Simon Summers. I’d like to mention particularly that the book was also greatly influenced by Wes Sharrock (a Winchian genius) and Bojana Mladenovic (whose work on Kuhn I bow to, which is not the kind of thing I say very often!).

Reasoning, Autism & Vaccines

Good parents are protective of their children and fear things they think could do them harm. Unfortunately, parents (like anyone else) can be mistaken about what is and is not harmful. This can occur because, ironically, people tend to reason poorly when they are afraid. This is ironic because such situations are when we really need our reasoning skills the most. One such situation is the controversy over vaccines and autism.

Dr. Paul Offit recently wrote a book on the subject, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure, that has generated a great deal of controversy. One of the purposes of the book is to address the alleged causal link between vaccines and autism. His position, which is well supported by the weight of scientific research, is that vaccines do not cause autism and are safe for children.

Despite the fact that the weight of evidence shows that vaccines (including the thimerosal that was once used as a preservative in some vaccines) do not cause autism, many people still believe there is a connection. Part of this is due to the fact that a now retracted 1998 study suggested a link between vaccines and autism. Part of this is due to the fact that lawyers, celebrities, and people in the media continue to claim that there is a connection. Even John McCain asserted his belief in the connection. In some cases, these people are honestly mistaken. In other cases, they stand to benefit from such claims.

Not surprisingly, Offit is the target of anger and even threats. In the case of people who are willfully misleading the public, this is to be expected. In the case of people who are honestly mistaken, this might seem surprising. After all, one would think they would be grateful to know the truth. Of course, there are those who doubt that Offit has the truth.

Offit’s critics contend that he stands to gain financially from vaccines and hence is biased. Offit profited from the sale of the vaccine RotaTeq and has also been a paid and unpaid consultant for the drug company Merck (which now manufactures RotaTeq).

From a critical thinking standpoint, this concern is reasonable. After all, a person’s credibility is reduced to the degree that they are biased, and money is a strong biasing factor. When people raise this concern, they are reasoning well, provided that they are raising it on good grounds and not as a rationalization for their rejection of his claims.

While money is a biasing factor, when assessing the credibility of a source it is also important to consider the whole picture and not just one factor. To do otherwise would be to fall victim to a bias oneself. Offit’s personal history and behavior seem to indicate that he is more concerned about the well-being of children than about money, and people do speak highly of him. These factors might very well offset any bias from his financial ties to vaccines and, of course, his scientific credentials are quite solid.

Fortunately, resolving the key issue (whether vaccines cause autism or not) does not depend on Offit’s credibility alone. There has been extensive research into the alleged connection between vaccines and autism and, as noted above, the weight of the evidence is overwhelmingly against there being a meaningful connection. Why, then, do people still believe that there is a connection?

First, as mentioned above, there are celebrities, people in the media, lawyers and others who claim that there is a connection. People are often bad at discerning between legitimate authorities on a subject and people speaking on that subject who are famous for something else (like being an actor). When people make this error they are committing a fallacious appeal to authority.

Second, people are generally poor at scientific reasoning and critical thinking, even college-educated people. My own university’s assessment process revealed that most of our students are weak in these areas, and similar assessments at other schools have revealed similar results. These results match my own experiences teaching critical thinking. When people are not very good at scientific reasoning and critical thinking, they tend not to know how to assess such research and also tend to be less influenced by logical arguments. Instead, they tend to be more influenced by emotional factors and poor reasoning. This leads to the third reason.

Third, people are more influenced by their emotions than by reason. Many parents are worried that their child will develop autism. Thus, fear and love lead them to be concerned about anything that might cause autism. These emotions can impede their ability to assess the matter rationally, and they can come to feel that vaccines are a threat when they hear about the alleged connection. If they are not good at critical thinking, they will not be able to properly investigate the matter and hence will tend to stick with how they feel rather than finding out what would be most rational to think.

Fourth, most people tend to be more influenced by poor reasoning than by good reasoning. As I tell my students, fallacies tend to be far more persuasive than logical arguments. After all, people tend to feel far more strongly than they think. Further, people tend to fall into very predictable patterns of poor reasoning and accept the results as true.

In the case of autism and vaccines, people seem to fall into the post hoc fallacy. This fallacy derives its name from the Latin phrase “Post hoc, ergo propter hoc,” traditionally interpreted as “After this, therefore because of this.” This fallacy is committed when it is concluded that one event causes another simply because the proposed cause occurred before the proposed effect. In the case at hand, parents might notice their child showing signs of autism after receiving vaccinations and assume that the vaccinations are the cause. However, without adequate evidence linking the two, this would not be a reasonable inference.

Fifth, when people do not know what is causing something harmful, they can start grasping at explanations. People, sensibly enough, do not like being ignorant of what is causing autism. While this does motivate people to search for a cause, it can also lead people to simply pick an explanation so that they now feel more in control. Of course, if someone does not have a good grasp of critical reasoning, they can accept something as a cause that really is not. For example, people have explained illnesses and deaths in terms of witchcraft, curses and vampires. Today, few people would attribute autism to supernatural causes, but taking vaccines as the cause without adequate evidence can be seen as somewhat similar: a cause (or scapegoat) must be found to make people feel better.

Does this mean that vaccines have no possible link to autism and the people who worry about it are foolish? No, clearly not. There could be cases in which a vaccine has triggered autism by interacting with many other factors and it is reasonable to be concerned about such a possibility. However, it is also important to approach the matter rationally and not let fear lead people into making unwise choices.

After all, while there is the possibility that vaccines might have some link to autism, it is well established that vaccines protect children from very real harms. Some parents, afraid of the alleged link, have not vaccinated their children or are not following the recommended schedules. Given the serious consequences of some of these diseases, a failure to vaccinate properly could be very harmful to the children. Naturally, these vaccines should be made as safe as possible.

What Has Philosophy Done Lately?

In a previous post, I addressed the question of the value of philosophy. As one comment pointed out, even if it is granted that philosophy did many wonderful things in the past, there is still the obvious question: what has philosophy done for us lately?

Not surprisingly, I have to address this question when I teach my Introduction to Philosophy class. The students generally accept that philosophy has been of some service in the past, but they do want to know what the class has to offer them now (aside from the credit hours and knocking off a humanities requirement). Like most philosophy professors, I speak of the value of developing their intellectual abilities, of considering timeless problems, of becoming critical thinkers and of broadening their minds. Once in a while, a particularly clever student will ask the dreaded question: “can’t we get all that, plus some useful information and skills, from some other class?” Put in more general terms, the challenge is this: does philosophy have anything special to offer people today that they cannot get elsewhere?

Addressing this question first requires considering the nature of philosophy. Defining the word “philosophy” is easy enough. It means “the love of wisdom.” Of course, this does not say very much about what philosophy really is all about.

Plato offered a clear account of the nature of philosophy. Philosophers are lovers of wisdom and are distinct from the lovers of sights and sounds. His metaphysics and epistemology provided a clear distinction between philosophy and other fields. They also made it clear why philosophy has value.

To be specific, philosophers are concerned with the pure, perfect, and eternal forms (such as justice). These forms are components of the true reality and all other things are but inferior copies. Hence, knowledge (as opposed to mere opinion) is based on the forms. Roughly put, philosophy has value because it deals with what is true and real. In stark contrast, the lovers of sights and sounds are concerned with the inferior objects of the physical world. Hence, they deal with mere opinion rather than knowledge.

So, on Plato’s view, scientists (such as Wolpert) who study physical phenomena are not advancing knowledge. Instead, they are merely playing with copies and developing opinions. Thus, they are the ones that have contributed nothing to knowledge. They have merely piled up opinions.

An obvious reply to this is that there are excellent arguments against Plato’s epistemology and metaphysics. Such arguments would undercut this sort of case for philosophy’s value. A second obvious reply is that it seems problematic and even question begging to base an entire discipline on the very specific views of one person. Surely one should be suspicious of defining all of philosophy on the basis of one person, even if that person is Plato.

What is needed, obviously enough, is a suitable definition of philosophy. This definition needs to meet the conditions of a good definition (avoiding circularity, avoiding being too narrow or too broad, and so on), of course. With such a definition in hand, one can start looking to see what philosophy has done for us lately.

It should not be expected that such a definition would include everything that now gets labeled as philosophy by professional philosophers. Further, the definition might very well allow in things that many professional philosophers would reject.

I must admit that I do not have such a definition. I obviously have beliefs about what counts as philosophy, but I do not have a list that provides the necessary and sufficient conditions. I can, of course, point to what philosophers have done and what we count as philosophy. But, such an approach is sorely lacking. That task must fall to another time and to other minds.

For now, perhaps the current rough view of philosophy can be used to see if philosophy has done anything useful lately.

From a pragmatic standpoint, philosophy does do useful things: people get paid to teach it, students get credit to take classes in it, books are sold about it and so on. Of course, that is not the sort of value that is of concern here.

One problem with discerning the value of philosophy is that much of what philosophy used to do has been taken over by others. As noted in the earlier blog, philosophy gave rise to science and logic, but these areas have been taken over (partially or completely) by others. This process is ongoing and not just something that happened with the rise of formal science.

To give two examples, consider critical thinking and ethics. Not so long ago, critical thinking was largely considered to “belong” to philosophers. However, in recent years “critical thinking” has become a buzz phrase and many want a slice of the critical thinking pie (in part because there is now money to be made as critical thinking consultants). My own university recently had sessions on critical thinking for the faculty. Interestingly, philosophers were not involved. Further, there is a university-wide Quality Enhancement Program (yet another buzz phrase) that is now fixated on critical thinking. Oddly enough, though I have taught the critical thinking class on campus for fifteen years, I was never asked to participate. None of my colleagues were asked, either. Apparently, this is not uncommon, and it seems likely that critical thinking will, perhaps in short order, no longer be considered part of philosophy. If so, this will make philosophy seem even less useful.

In regards to ethics, many schools offer specialty ethics classes that are not taught by philosophers. For example, the school of business at my university has a business ethics class that is taught by a business professor. Similarly, there are other professional ethics classes taught within specific departments. On one hand, this does make sense: someone in the field would tend to know more about the specific ethical expectations in the field.  This is one reason given for having specific ethics classes taken over by non-philosophy departments. On the other hand, since I would not be qualified to teach business classes or nursing classes, it seems that a business professor or nursing professor would not be qualified to teach ethics. Those more cynical than I might say that these departments created the ethics classes to boost their enrollments (department budgets and available faculty positions are often connected to the number of students enrolled). If ethics continues to be taken over by specific fields (analogous to how the sciences split off), then there will be less that philosophers can point to in terms of the value of their discipline.

Some people (including philosophers) have predicted the end of philosophy. Perhaps if philosophers are left with nothing useful to do, that will be the end of philosophy as an independent discipline. While parts of it will remain, they will be incorporated in other disciplines. Unless, of course, there is something philosophy does that is unique to philosophy and cannot be stolen away (then again, perhaps anything can be stolen).

One role that philosophers have long held and still hold is that of intellectual scouts. For example, in the case of the sciences, philosophers scouted out the intellectual territories that would eventually become the sciences. This scouting is, obviously enough, not physical scouting. Rather, philosophers explored possible methodologies, questions, content and problems. From these explorations, philosophers developed rough maps. After the territory had been scouted, others came to these intellectual lands and began to colonize them. The initial crude villages grew into towns and then into cities. Naturally, those who work in these massive cities sometimes forget those early explorers who made the cities possible. However, the value of their efforts remains.

While some have claimed that there is nothing new under the sun, the scope of our ignorance seems to vastly exceed the scope of our knowledge. Literally and figuratively, there is at least one universe that we have but begun to explore. As such, intellectual scouts are still of great importance. While some of the scouting parties are launched from established cities (that is, scientists and such exploring their own fields) there are still undiscovered countries that belong to no other established discipline. Philosophy, I think, can and should stake her claim to these areas and set out once more in the spirit that got her started in the first place. Naturally, others will follow and build cities there. Some of them will remark about how useless philosophy has been and is, forgetting all the while the importance of scouts and explorers.

The Value of Philosophy, Yet Again

One of the most annoying things about being a professional philosopher is the fact that I so often am called upon to defend the value of my profession and my discipline. One thing that makes it especially annoying is that so many philosophers have written so much about the value of philosophy (including, of course, Russell’s work on the subject). One would think that the value of philosophy would be a settled matter by now. However, this is not the case.

Like almost all professors, I have to deal with the occasional student who questions the value of my discipline in general or my class in particular. I have, naturally enough, worked out a well-developed reply to such questions. In addition to the challenges put forth by students, philosophers also face a challenge put forth by fellow academics. For example, The Philosophers’ Magazine (third quarter 2008, pages 120-126) features an article by Julian Baggini in which Lewis Wolpert’s view of philosophy is discussed. Wolpert puts forth the usual charge against philosophy: “…philosophy is not successful. It has achieved nothing.” (page 121). He does concede that Aristotle made a difference, and he does allow a place for political and moral philosophy. Other than that, he regards philosophy as not making “the slightest difference” in regards to what we know.

Naturally enough, these criticisms have some plausibility. Philosophy has long been attacked because it bakes no bread, builds no weapons, and seems to do nothing. In short, philosophy seems to be useless. If this is the case, then philosophy professors like me have worked out quite a scheme: we get paid to achieve nothing. However, I think that Wolpert and the other critics are fundamentally mistaken about the value of philosophy.

One stock argument is to present the accomplishments of philosophers such as Thales, Descartes, Leibniz, Newton and others. Since these people accomplished so much in terms of science, mathematics, and geometry, it would seem mistaken to regard philosophy as lacking in achievements.
Of course, there is an obvious reply to this. While Thales, Descartes, Leibniz, and Newton were all philosophers, it could be argued that their achievements were within other disciplines. For example, Descartes’ work in mathematics and geometry was a great achievement, but an achievement of mathematics and geometry. To use an analogy, while I am a philosopher and I have won 5Ks and 10Ks, it would be incorrect to say that philosophy has achieved victories in running. Rather, I just happen to be a philosopher who is also a runner. It is as a runner that I accomplish such achievements. Likewise, it is as a scientist that Newton accomplished his great achievements. Thus, the mere fact that philosophers have had great achievements does not entail that philosophy has achieved anything.

Another stock argument is to present achievements that seem to clearly be within the discipline of philosophy. The modern sciences, it is often argued, arose from philosophy (mainly what was known as “natural philosophy”). Further, logic, critical thinking and reasoning are all within the domain of philosophy. Wolpert himself notes the importance of avoiding logical contradictions (page 125) when using the scientific method. Thus, it would seem that philosophy has achieved something after all.
Not surprisingly, there are ways to reply to this defense of philosophy.

In regards to the sciences, it can be argued that while philosophers did contribute to the rise of the sciences, they did so as scientists (or pre-scientists). This is a variation on the argument given above. It could be conceded (as Wolpert does for Aristotle) that philosophy did give rise to the sciences. However, it could be argued that this is analogous to parents having children who accomplish great things. While the children would not exist without the parents, the children’s accomplishments are their own and hence do not count as achievements for the parents. Philosophy can, of course, take pride in bringing such children into the world. But that is all the credit she deserves.

The matter of logic (broadly taken) does present a tougher dragon to slay. On the face of it, there seem to be two important points here. First, logic belongs to philosophy. Second, logic is extremely useful and seems to be quite a feather in philosophy’s cap. Not to brag, but logic is critical to the information age. Without such logic, there would be no PCs, no internet, no Nintendo Wiis, no Xboxes (360 or otherwise), and no iPods. This alone should refute the charge that philosophy has achieved nothing. Of course, logic and its various domains (such as critical thinking) are also useful in many other ways. Imagine a world without logic and critical thinking and their value seems evident.

This would seem to provide philosophy with an ironclad claim to achievements. However, perhaps philosophy can still be robbed of her prize.

One way to rob philosophy in this matter is to argue that logic belongs to another discipline or that specific types of logic belong to specific disciplines. For example, symbolic logic could be seen as belonging to the discipline of mathematics. The logic used in computers could be seen as belonging to computer science. Scientific and professional reasoning (law, economics, business, etc.) could be seen as belonging to those disciplines. This approach, obviously enough, mimics that used by Socrates against Ion. Socrates argued that the specific content of a poem belonged not to poetry but rather to some other field. For example, while chariot racing is described in the Iliad, the art of racing does not belong to poetry and poets cannot claim the accomplishments of the chariot racers as their own. Likewise, while philosophers talk about logic, logic does not belong to philosophy. Hence, philosophy deserves no credit for the value of logic. Rather, proper credit belongs to all the various disciplines that own a piece of logic.

In defense of philosophy, it can be argued that while other disciplines have employed and developed logic, philosophy deserves the credit for creating logic. To use an analogy, to deny philosophy credit for logic would be like denying Thomas Edison credit for his inventions because other people have developed them in so many new and useful ways over the years.

While this seems like a reasonable argument, there is a way to counter it. When I was in graduate school, I first encountered what turned out to be a standard means of arguing that philosophy accomplishes nothing. Put bluntly, the tactic is to argue that every accomplishment attributed to philosophy belongs to another discipline. This is often done by defining “philosophy” in such a way that achieving results means that one is no longer practicing philosophy but doing something else. For example, once a philosopher begins to develop logic, then he is no longer doing philosophy. Hence, philosophy did not even give the world the beginnings of logic.

This approach does, in a way, work. If the discipline of philosophy is defined in a way that precludes achievement, then philosophy can (by definition) never achieve anything. The same sort of method can be used to “prove” that a liberal can never accomplish anything. Just define “liberal” such that if someone achieves something, then she is not a liberal.

There seems to be no compelling reason why philosophers should accept this view of philosophy. Naturally enough, those who claim philosophy accomplishes nothing would need to provide an adequate defense of such a definition. Philosophers are, of course, obligated to provide an alternative definition.