Tag Archives: Epistemology

Believing What You Know is Not True

“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”

Stephen Colbert

 

While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from the land of comedy into the realm of philosophy (though detractors might claim that philosophy is comedy without humor; but that is actually law). Colbert has what seems to be an odd epistemology: he regularly claims that he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as merely comedic, it does raise many interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.

While a thorough examination of this issue would require digging deeply into the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold the opinion that it is true. A belief is true, in the common sense view, when it gets reality right—this is the often maligned correspondence theory of truth. The stock simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks, but it shall suffice for this discussion.

Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.
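
To make the logical point concrete, here is a minimal Python sketch (purely illustrative) showing that the conjunction of a claim and its negation comes out false under every assignment of truth values:

```python
# A contradiction (P and not-P) is false no matter how P is assigned.
for P in (True, False):
    print(f"P={P}: P and not P = {P and not P}")
# Output:
# P=True: P and not P = False
# P=False: P and not P = False
```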

One possible response is to point out that the human mind is not beholden to the rules of logic—while a contradiction cannot be true, there are many ways a person can hold to contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold to both.  This might be due to an ability to compartmentalize the beliefs so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.

While these responses do have considerable appeal, they do not appear to work in cases in which the person actually claims, as Colbert does, that they believe something they know is not true. After all, making this claim does require considering both beliefs in the same context and, if the claim of knowledge is taken seriously, it requires that the person is aware that the rejection of the belief is sufficiently justified to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem to be either not telling the truth or ignorant of what the words mean. Or perhaps there are other alternatives.

One possibility is to consider the power of cognitive dissonance management—a person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem. I will explore this possibility in the context of comfort beliefs in a later essay.

Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true—even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved them, but that belief was false.

Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty-sided die, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20, and that belief has a 5% chance of being true. As such, I can believe what I know is not true—assuming that this means that I can believe in something that I believe is less likely than another belief.
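
The arithmetic behind the die example is simply 1/20, which is 5%. A small Python sketch (illustrative only) makes the figure explicit and checks it by simulation:

```python
import random

# Probability of rolling a 20 on a fair twenty-sided die: 1/20 = 5%.
exact = 1 / 20
print(f"Exact chance of a 20: {exact:.0%}")

# A quick simulation to confirm the figure empirically.
rolls = 100_000
hits = sum(1 for _ in range(rolls) if random.randint(1, 20) == 20)
print(f"Simulated chance over {rolls} rolls: {hits / rolls:.1%}")
```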

People are also strongly influenced by emotional and other factors that are not based in a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).

 


Skepticism, Locke & Games

In philosophy skepticism is the view that we lack knowledge. There are numerous varieties of skepticism and these are defined by the extent of the doubt endorsed by the skeptic. A relatively mild case of skepticism might involve doubts about metaphysical claims while a truly rabid skeptic would doubt everything—including her own existence.

While many philosophers have attempted to defeat the dragon of skepticism, all of these attempts seem to have failed. This is hardly surprising—skepticism seems to be unbreakable. The case for this has an ancient pedigree and can be distilled down to two simple arguments.

The first goes after the possibility of justifying a belief and thus attacks the standard view that knowledge requires a belief that is true and justified. If a standard of justification is presented, then there is the question of what justifies that standard. If a justification is offered, then the same question can be raised into infinity. And beyond. If no justification is offered, then there is no reason to accept the standard.

A second stock argument for skepticism is that any reasonable argument given in support of knowledge can be countered by an equally reasonable argument against knowledge.  Some folks, such as the famous philosopher Chisholm, have contended that it is completely fair to assume that we do have knowledge and begin epistemology from that point. However, this seems to have all the merit of grabbing the first place trophy without actually competing.

Like all sane philosophers, I tend to follow David Hume in my everyday life: my skepticism is nowhere to be seen when I am filling out my taxes, sitting in a brain-numbing committee meeting, or having a tooth drilled. However, like a useless friend, it shows up again when it is no longer needed. As such, it would be nice if skepticism could be defeated or at least rendered irrelevant.

John Locke took a rather interesting approach to skepticism. While, like Descartes, he seemed to want to find certainty, he settled for a practical approach to the matter. After acknowledging that our faculties cannot provide certainty, he asserted that what matters to us is the ability of our faculties to aid us in our preservation and wellbeing.

Jokingly, he challenges “the dreamer” to put his hand into a furnace—this would, he claims, wake him “to a certainty greater than he could wish.” More seriously, Locke contends that our concern is not with achieving epistemic certainty. Rather, what matters is our happiness and misery. While Locke can be accused of taking an easy out rather than engaging the skeptic in a battle of certainty or death, his approach is certainly appealing. Since I happened to think through this essay while running with an injured back, I will use that to illustrate my view on this matter.

When I set out to run, my back began hurting immediately. While I could not be certain that I had a body containing a spine and nerves, no amount of skeptical doubt could make the pain go away—in regards to the pain, it did not matter whether I really had a back or not. That is, in terms of the pain it did not matter whether I was a pained brain in a vat or a pained brain in a runner on the road. In either scenario, I would be in pain and that is what really mattered to me.

As I ran, it seemed that I was covering distance in a three-dimensional world. Since I live in Florida (or what seems to be Florida), I was soon feeling quite warm and had that Florida feel of sticky sweat. I could eventually feel my thirst and some fatigue. Once more, it did not seem to really matter if this was real—whether I was really bathed in sweat or a brain bathed in some sort of nutrient fluid, the run was the same to me. As I ran, I took pains to avoid cars, trees and debris. While I did not know if they were real, I have experienced what it is like to be hit by a car (or as if I were hit by a car) and have also had experiences involving falling (or the appearance of falling). In terms of navigating through my run, it did not matter at all whether it was real or not. If I knew for sure that my run was really real for real, that would not change the run. If I somehow knew it was all an illusion that I could never escape, I would still run for the sake of the experience of running.

This, of course, might seem a bit odd. After all, when the hero of a story or movie finds out that she is in a virtual reality what usually follows is disillusionment and despair. However, my attitude has been shaped by years of gaming—both tabletop (BattleTech, Dungeons & Dragons, Pathfinder, Call of Cthulhu, and so many more) and video (Zork, Doom, Starcraft, Warcraft, Destiny, Halo, and many more). When I am pretending to be a paladin, the Master Chief, or a Guardian, I know I am doing something that is not really real for real. However, the game can be pleasant and enjoyable or unpleasant and awful. This enjoyment or suffering is just as real as enjoyment or suffering caused by what is supposed to be really real for real—though I believe it is but a game.

If I somehow knew that I was trapped in an inescapable virtual reality, then I would simply keep playing the game—that is what I do. Plus, it would get boring and awful if I stopped playing. If I somehow knew that I was in the really real world for real, I would keep doing what I am doing. Since I might be trapped in just such a virtual reality or I might not, the sensible thing to do is keep playing as if it is really real for real. After all, that is the most sensible option in every case. As such, the reality or lack thereof of the world I think I occupy does not matter at all. The play, as they say, is the thing.

 


Ex Machina & Other Minds III: The Mind of the Machine

While the problem of other minds is a problem in epistemology (how does one know that another being has/is a mind?), there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is the contrast between machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind and it is certainly interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (with the exception of functionalism) were developed to provide accounts of the minds of living creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But, there is a negative side. Unfortunately for classic identity theory, it has been undermined by the arguments presented by Saul Kripke and David Lewis’ classic “Mad Pain & Martian Pain.” As such, it seems reasonable to reject identity theory as an account for traditional human minds as well as machine minds.

Perhaps the best known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called “soul”) that drives around the physical body. While this is a popular view outside of academics, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the rather serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct, but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way:  mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve the problem—one way causation between the material and the immaterial is fundamentally as mysterious as two way causation. It also seems to have the defect of making the mental properties unnecessary and Ockham’s razor would seem to require going with the simpler view of a physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no weirder than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that the mental properties can bring about changes in the physical properties of the body and vice versa. That is, the interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor—just the assertion that the view should not be dismissed from mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regards to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body.

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity.  As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different, but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
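
The processor analogy can be put in code. The sketch below is only illustrative and the class names are hypothetical: the point is that two physically different implementations count as being in the same state so long as they fill the same functional role.

```python
from abc import ABC, abstractmethod

class Processor(ABC):
    """A functional role: anything that can run a program fills it."""
    @abstractmethod
    def run(self, program: str) -> str: ...

class IntelChip(Processor):
    def run(self, program: str) -> str:
        # One physical realization of the role.
        return f"Intel silicon executes {program}"

class AMDChip(Processor):
    def run(self, program: str) -> str:
        # A physically different realization of the same role.
        return f"AMD silicon executes {program}"

# Functionally identical with respect to the role of running Windows,
# despite being physically distinct.
for chip in (IntelChip(), AMDChip()):
    print(chip.run("Windows"))
```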

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind would be a rather plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested in order to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.

 


Ex Machina & Other Minds II: Is the Android a Psychopath?

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” As in that essay, there will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and clearly passes the Cartesian test (she uses true language appropriately). But, Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking down doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.) which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a rather positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people in order to achieve one’s goals. In the movie, Ava clearly has what might be called Manipulative Intelligence (M.Q.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—thus suggesting she is a psychopath.

While the term “psychopath” gets thrown around quite a bit, it is important to be a bit more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regards to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying.

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it seems rather important to be able to determine whether a machine is a psychopath or not and to do so well before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often quite charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and are able to pass themselves off as normal people.

While Ava is a fictional android, the movie does present a rather effective appeal to intuition by creating a plausible android psychopath. She is able to manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter well worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals might be rather positive. For example, it is easy to imagine a medical or care-giving robot that uses its MQ to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MQ to please its partners. However, these goals might be rather negative—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath or not matters is the same reason why being able to determine if a human is a psychopath or not matters. Roughly put, it is important to know whether or not someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same function (only more reliably) as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a Hal 9000 sort of situation).

In regards to testing machines, what would be needed would be something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, there would be the possibility that a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since an artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would still remain the question of what is really going on in that mind.

 


Ex Machina & Other Minds I: Setup

The movie Ex Machina is what I like to call “philosophy with a budget.” While the typical philosophy professor has to present philosophical problems using words and Powerpoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.

While the Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.

The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.
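
A bare-bones sketch of that protocol, with hypothetical judge and respondent objects standing in for the participants, might look like the following. It is a sketch of the test's structure only, not a workable evaluation.

```python
import random

def imitation_game(judge, human, machine, rounds: int = 10) -> bool:
    """Toy sketch of the Turing test. The judge holds a text conversation with
    an unseen respondent and must guess whether it is the machine. The judge,
    human, and machine objects are hypothetical, assumed to supply questions(),
    reply(), and guess() methods; only the structure of the test matters here."""
    correct_guesses = 0
    for _ in range(rounds):
        respondent = random.choice([human, machine])
        transcript = [respondent.reply(question) for question in judge.questions()]
        guessed_machine = judge.guess(transcript)  # True if judge thinks it's the machine
        if guessed_machine == (respondent is machine):
            correct_guesses += 1
    # The machine passes if the judge does no better than chance.
    return correct_guesses / rounds <= 0.5
```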

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

As a test for intelligence, artificial or otherwise, this seems to be quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. This is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

 


Discussing the Shape of Things (that might be) to Come

One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.

One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism. This criticism does assume that philosophy has value and it is this assumption that also provides a foundation for the criticism. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as being a waste of time—responding to this view would require a general defense of philosophy and this goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.

This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.

In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out the ethics of how the car should be programmed when it must “decide” what to hit and what to avoid when an accident is occurring. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.
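
To see why such guidance is needed before the fact, note that whatever the ethicists decide eventually has to be written down as something like the following. This is a purely hypothetical sketch: the harm scores and the rule for comparing them are precisely the contested moral questions, not settled answers.

```python
def choose_collision_target(options):
    """Hypothetical sketch: given unavoidable collision options, each a dict
    like {"target": "barrier", "expected_harm": 0.4}, pick the one the chosen
    ethical framework scores as least harmful. How 'expected_harm' is assigned,
    and whether minimizing it is the right rule, are the open moral questions."""
    return min(options, key=lambda option: option["expected_harm"])

# Example use with made-up numbers:
options = [
    {"target": "barrier", "expected_harm": 0.4},
    {"target": "oncoming car", "expected_harm": 0.9},
]
print(choose_collision_target(options)["target"])  # "barrier"
```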

Philosophers also like to discuss what might be in other contexts than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or even especially) in regards to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.

 


Introduction to Philosophy

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class#5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes 1st Meditation, Foundationalism and Coherentism as well as the start to the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the 2nd half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals,  the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.


A Philosopher’s Blog: 2014 Free on Amazon

A Philosopher’s Blog: 2014 Philosophical Essays on Many Subjects will be available as a free Kindle book on Amazon from 12/31/2014-1/4/2015. This book contains all the essays from the 2014 postings of A Philosopher’s Blog. The topics covered range from the moral implications of sexbots to the metaphysics of determinism. It is available on all the various national Amazons, such as in the US, UK, and India.

A Philosopher’s Blog: 2014 on Amazon US

A Philosopher’s Blog: 2014 on Amazon UK


 

Philosopher’s Carnival No. 146

Hello new friends, philosophers, and likeminded internet creatures. This month TPM is hosting the Philosopher’s Carnival.

Something feels wrong with the state of philosophy today. From whence hath this sense of ill-boding come?

For this month’s Carnival, we shall survey a selection of recent posts that are loosely arranged around the theme of existential threats to contemporary philosophy. I focus on four. Pre-theoretic intuitions seem a little less credible as sources of evidence. Talk about possible worlds seems just a bit less scientific. The very idea of rationality looks as though it is being taken over by cognate disciplines, like cognitive science and psychology. And some of the most talented philosophers of the last generation have taken up arms against a scientific theory that enjoys a strong consensus. Some of these threats are disturbing, while others are eminently solvable. All of them deserve wider attention.

1. Philosophical intuitions

Over at Psychology Today, Paul Thagard argued that armchair philosophy is dogmatic. He lists eleven unwritten rules that he believes are part of the culture of analytic philosophy. Accompanying each of these dogmas, he proposes a remedy, ostensibly from the point of view of the sciences. [Full disclosure: Paul and I know each other well, and often work together.]

Paul’s list is successful in capturing some of the worries that are sometimes expressed about contemporary analytic philosophy. It acts as a bellwether, a succinct statement of defiance. Unfortunately, I do not believe that most of the items on the list hit their target. But I do think that two points in particular cut close to the bone:

3. [Analytic philosophers believe that] People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don’t trust your intuitions.

4. [Analytic philosophers believe that] Thought experiments are a good way of generating intuitive evidence. Natural alternative: use thought experiments only as a way of generating hypotheses, and evaluate hypotheses objectively by considering evidence derived from systematic observations and controlled experiments.

From what I understand, Paul is not arguing against the classics in analytic philosophy. (e.g., Carnap was not an intuition-monger.) He’s also obviously not arguing against the influential strain of analytic philosophers that are descendants of Quine — indeed, he is one of those philosophers. Rather, I think Paul is worried that contemporary analytic philosophers have gotten a bit too comfortable in trusting their pre-theoretic intuitions when they are prompted to respond to cases for the purpose of delineating concepts.

As Catarina Dutilh Novaes points out, some recent commentators have argued that no prominent philosophers have ever treated pre-theoretic intuitions as a source of evidence. If that’s true, then it would turn out that Paul is entirely off base about the role of intuition in philosophy.

Unfortunately, there is persuasive evidence that some influential philosophers have treated some pre-theoretic intuitions as a source of evidence about the structure of concepts. For example, Saul Kripke (in Naming & Necessity, 1972, p. 42) explained that intuitiveness is the reason why there is a distinction between necessity and contingency in the first place: “Some philosophers think that something’s having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of it, myself. I really don’t know, in a way, what more conclusive evidence one can have about anything, ultimately speaking.”

2. Philosophical necessity

Let’s consider another item from Paul’s list of dogmas:

8. There are necessary truths that apply to all possible worlds. Natural alternative: recognize that it is hard enough to figure out what is true in this world, and there is no reliable way of establishing what is true in all possible worlds, so abandon the concept of necessity.

In this passage Paul makes a radical claim. He argues that we should do away with the very idea of necessity. What might he be worried about?

To make a claim about the necessity of something is to make a claim about its truth across all possible worlds. Granted, our talk about possible worlds sounds kind of spooky, but [arguably] it is really just a pragmatic intellectual device, a harmless way of speaking. If you like, you could replace the idea of a ‘possible world’ with a ‘state-space’. When computer scientists at Waterloo learn modal logic, they replace one idiom with another — seemingly without incident.
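
To illustrate how harmless the idiom can be, here is a small sketch of possible-worlds (Kripke) semantics treated as nothing more exotic than a labeled graph: necessity at a world is just truth at every accessible world. The worlds, accessibility relation, and valuation are invented for the example.

```python
# A toy Kripke model: worlds, an accessibility relation, and a valuation
# saying which atomic claims hold at which worlds.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}}
true_at = {"w1": {"p"}, "w2": {"p"}, "w3": {"p", "q"}}

def necessarily(claim: str, world: str) -> bool:
    """'Box claim' holds at a world iff the claim holds at every accessible world."""
    return all(claim in true_at[w] for w in access[world])

def possibly(claim: str, world: str) -> bool:
    """'Diamond claim' holds at a world iff the claim holds at some accessible world."""
    return any(claim in true_at[w] for w in access[world])

print(necessarily("p", "w1"))  # True: p holds at w2 and w3
print(necessarily("q", "w1"))  # False: q fails at w2
print(possibly("q", "w1"))     # True: q holds at w3
```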

If possible worlds semantics is just a way of speaking, then it would not be objectionable. Indeed, the language of possible worlds seems to be cooked into the way we reason about things. Consider counterfactual claims, like “If Oswald hadn’t shot Kennedy, nobody else would’ve.” These claims are easy to make and come naturally to us. You don’t need a degree in philosophy to talk about how things could have been, you just need some knowledge of a language and an active imagination.

But when you slow down and take a closer look at what has been said there, you will see that the counterfactual claim involves discussion of a possible (imaginary) world where Kennedy had not been shot. We seem to be talking about what that possible world looks like. Does that mean that this other possible world is real — that we’re making reference to this other universe, in roughly the same way we might refer to the sun or the sky? Well, if so, then that sounds like it would be a turn toward spooky metaphysics.

Hence, some philosophers seem to have gone a bit too far in their enthusiasm for the metaphysics of possible worlds. As Ross Cameron reminds us, David K. Lewis argued that possible worlds are real:

For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world with round squares is a world with round squares as parts.  And so, to believe in the latter world is to believe in round squares.  And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which could not exist.  In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, or etc.

And to make matters worse, some people even argue that impossible worlds are real, ostensibly for similar reasons. Some people…

…like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to include impossible worlds.

Much like the White Queen, proponents of this view want to believe impossible things before breakfast. The only difference is that they evidently want to keep at it all day long.

Cameron argues that there are different kinds of impossibility, and that at least one of them cannot be part of our ontology. If you’re feeling dangerous, you can posit impossible concrete things, e.g., round squares. But you cannot say that there are worlds where “2+2=5” and still call yourself a friend of Lewis:

For Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4.

While Cameron presents a cogent rebuttal to the impossibilist, his objection still leaves open the possibility that there are impossible worlds — at least, so long as those impossible worlds involve exotic concrete entities like round squares and not incoherent abstracta.

So what we need is a scientifically credible account of necessity and possibility. In a whirlwind of a post over at LessWrong, Eliezer Yudkowsky argues that when we reason using counterfactuals, we are making a mixed reference, one that involves both logical laws and the actual world.

[I]n one sense, “If Oswald hadn’t shot Kennedy, nobody else would’ve” is a fact; it’s a mixed reference that starts with the causal model of the actual universe where [Oswald was a lone agent], and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like ‘six’ for the product of apples on the table, is not actually present anywhere in the universe.

Yudkowsky argues that this is part of what he calls the ‘great reductionist project’ in scientific explanation. For Yudkowsky, counterfactual reasoning is central to the project and prospects of a certain form of science. Moreover, claims about counterfactuals can even be true. But unlike Lewis, Yudkowsky doesn’t need to argue that the worlds invoked by counterfactuals (or counterpossibles) are really real. This puts Yudkowsky on some pretty strong footing. If he is right, then it is hardly any problem for science (cognitive or otherwise) if we make use of a semantics of possible worlds.
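For a rough sense of what ‘counterfactual surgery’ amounts to, here is a toy sketch in Python. It is my own illustration of the general idea, not code from Yudkowsky’s post: start from a small causal model of the actual world, forcibly set one variable to its counterfactual value, and recompute what follows downstream.

```python
# A toy illustration of "counterfactual surgery" on a causal model.
# (My own sketch of the general idea, not Yudkowsky's code.)

def kennedy_model(oswald_shoots: bool, second_shooter: bool) -> bool:
    """Kennedy is shot iff someone actually fires."""
    return oswald_shoots or second_shooter

# The causal model of the actual universe (Oswald acting alone).
actual = {"oswald_shoots": True, "second_shooter": False}
print(kennedy_model(**actual))  # True: Kennedy is shot in the actual world

# Counterfactual surgery: sever "oswald_shoots" from its causes and set it to False,
# leaving everything else as it actually was, then recompute the outcome.
surgered = dict(actual, oswald_shoots=False)
print(kennedy_model(**surgered))  # False: on the lone-agent model, nobody else shoots
```

On this picture, the counterfactual’s truth is a fact about the model plus a logical operation performed on it, not a report about some far-off concrete universe.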

Notice that for Yudkowsky’s project to work, there has to be a distinction between abstracta and concreta in the first place, such that both are the sorts of things we’re able to refer to. But what, exactly, does the distinction between abstract and concrete mean? Is it perhaps just another way of upsetting Quine by talking about the analytic and the synthetic?

In a two-part analysis of reference [here, then here], Tristan Haze at Sprachlogik suggests that we can understand referring activity as contact between nodes belonging to distinct language-systems. In his vernacular, reference to abstract propositions involves the direct comparison of two language-systems, while reference to concrete propositions involves the coordination of systems in terms of a particular object. But I worry that unless we learn more about the causal and representational underpinnings of a ‘language-system’, nothing stops us from concluding that his theory of reference is really just a comparison of languages. And if so, then it would be well-trod territory.

3. Philosophical rationality

But let’s get back to Paul’s list. Paul seems to think that philosophy has drifted too far away from contemporary cognitive science. He worries that philosophical expertise is potentially cramped by cognitive biases.

Similarly, at LessWrong, Lukeprog worries that philosophers are not taking psychology very seriously.

Because it tackles so many questions that can’t be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn’t: we generally are as “stupid and self-deceiving” as science assumes we are. We’re “predictably irrational” and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one’s rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn’t seem so. I don’t see much Kahneman & Tversky in philosophy syllabi — just light-weight “critical thinking” classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don’t like. So what’s really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.

In some sense or other, Luke is surely correct. Philosophers really should be paying close attention to the antecedents of (ir)rationality, and really should be training their students to do exactly that. Awareness of cognitive illusions must be a part of the philosopher’s toolkit.

But does that mean that cognitive science should be a part of the epistemologist’s domain of research? The answer looks controversial. Prompted by a post by Leah Lebresco, Eli Horowitz at Rust Belt Philosophy argues that we also need to take care that we don’t simply conflate cognitive biases with fallacies. Instead, Horowitz argues that we ought to make a careful distinction between cognitive psychology and epistemology. In a discussion of a cognitive bias that Lebresco calls the ‘ugh field’, Horowitz writes:

On its face, this sort of thing looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature… it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself.

In principle, Eli’s point is sound. There is, after all, at least a superficial difference between dispositions to (in)correctness, and actual facts about (in)correctness. But even if you think he is making an important distinction, Leah seems to be making a useful practical point about how philosophers can benefit from a change in pedagogy. Knowledge of cognitive biases really should be a part of the introductory curriculum. Development of the proper reasoning environment is, for all practical purposes, of major methodological interest to those who teach how to reason effectively. So it seems that in order to do better philosophy, philosophers must be prepared to do some psychology.

4. Philosophical anti-Darwinism

The eminent philosopher Thomas Nagel recently published a critique of Darwinian accounts of evolution through natural selection. In this effort, Nagel joins Jerry Fodor and Alvin Plantinga, who have also published philosophical worries about Darwinism. The works in this subgenre have by and large been thought to be lacking in empirical and scholarly rigor. This trend has caused a great disturbance in the profession, as philosophical epistemologists and philosophers of science are especially sensitive to the ridicule they face from scientists who write in the popular press.

Enter Mohan Matthen. Writing at NewAPPS, Mohan worries that some of the leading lights of the profession are not living up to expectations.

Why exactly are Alvin Plantinga and Tom Nagel reviewing each other? And could we have expected a more dismal intellectual result than Plantinga on Nagel’s Mind and Cosmos in the New Republic? When two self-perceived victims get together, you get a chorus of hurt: For recommending an Intelligent Design manifesto as Book of the Year, Plantinga moans, “Nagel paid the predictable price; he was said to be arrogant, dangerous to children, a disgrace, hypocritical, ignorant, mind-polluting, reprehensible, stupid, unscientific, and in general a less than wholly upstanding citizen of the republic of letters.”

My heart goes out to anybody who utters such a wail, knowing that he is himself held in precisely the same low esteem. My mind, however, remains steely and cold.

Plantinga writes, “Nagel supports the commonsense view that the probability of [life evolving by natural selection] in the time available is extremely low.” And this, he says, is “right on target.” This is an extremely substantive scientific claim—and given Plantinga’s mention of “genetic mutation”, “time available,” etc., it would seem that he recognizes this. So you might hope that he and Nagel had examined the scientific evidence in some detail, for nothing else would justify their assertions on this point. Sadly, neither produces anything resembling an argument for their venturesome conclusion, nor even any substantial citation of the scientific evidence. They seem to think that the estimation of such probabilities is well within the domain of a priori philosophical thought. (Just to be clear: it isn’t.)

Coda

Pre-theoretic intuitions are here to stay, so we have to moderate how we think about their evidential role. The metaphysics of modality cannot be dismissed out of hand — we need necessity. But we also need for the idea of necessity to be tempered by our best scientific practices.

The year is at its nadir. November was purgatory, as all Novembers are. But now December has arrived, and the nights have crowded out the days. And an accompanying darkness has descended upon philosophy. Though the wind howls and the winter continues unabated, we can find comfort in patience. Spring cannot be far off.

Issue No.147 of the Philosopher’s Carnival will be hosted by Philosophy & Polity. See you next year.


The Republicans’ Epistemic Problem

Karl Rove, Assistant to the President,... (Photo credit: Wikipedia)

Epistemology is a branch of philosophy that focuses on knowledge: determining the nature of knowledge, sorting out what we can (and cannot) know, and similar concerns. While people often think of epistemology in terms of strange skeptical problems such as the brain-in-the-vat and the Cartesian demon, it actually has rather practical aspects. After all, sorting out what is known from what is merely believed matters in the practical business of life. Also, a significant portion of critical thinking can be seen in terms of epistemology: determining what justifies believing that a claim is true.

In very rough and ready terms, to know a claim is to believe the claim, for the claim to actually be true and for the belief to be properly justified. As any professional philosopher will tell you, this rough and ready view has been roughly beaten over the years by various clever thinkers. However, for practical purposes this account works fairly well—provided that one takes the proper precautions.
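Written schematically (my notation, offered only as a shorthand for the rough account just given), the idea is:

```latex
% S knows that p  iff  S believes that p, p is true,
% and S's belief that p is properly justified.
K(S,p) \;\iff\; B(S,p) \,\land\, p \,\land\, J(S,p)
```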

My main purpose is not, however, to do battle over the fine points of an account of knowledge. Rather, my objective is to discuss the Republicans’ epistemic problem to illustrate how politics and epistemology can intersect.

As noted above, a rough account of knowledge involves having a true belief that is properly justified. As might be imagined, the matters of justification and truth can be debated until the cows (if they exist) come home (if it exists). However, a crude view of truth should suffice for my purposes: a claim about the actual world is true when it matches the actual world. As far as justification goes, I will stick with an intuitive notion—that is, that the belief is properly formed and supported. To help give some flesh to this poor definition, I will use specific examples of beliefs that are not justified.

As I discussed in my essay on politics and alternative reality, political narratives are typically aimed at crafting what amounts to an alternative reality story. This generally involves two types of tales. The first is laying out a negative narrative describing one’s opponents. The second is spinning a positive tale about one’s virtues. While all politicians and pundits play this game, the Republicans seem to have made the rather serious epistemic error of believing that their fictional narratives expressed justified, true beliefs.

While epistemologists disagree about justification, it seems reasonable to hold that believing a claim because one wants it to be true is not adequate justification. It also seems reasonable to hold that a belief formed by systematically ignoring and misinterpreting available evidence is not justified. That is, it seems reasonable to hold that fallacies do not serve as justification for a claim. Hence, it seems reasonable to hold that beliefs based on such poor reasoning do not meet the standard of knowledge—even if we lack a proper definition of knowledge.

One clear indicator of this was the shock and dismay on the part of conservative pundits such as Laura Ingraham. A bit before the election she said, “if you can’t beat Barack Obama with this record, then shut down the party.” Other pundits and spinions expressed incredulity at Obama’s ability to stay ahead of Romney in the polls, and they were terribly shocked when Obama won the actual election. This is understandable. On their narrative, Obama is the worst president in history. He has divided the country, brought socialism to America, destroyed jobs, played the race card against all opponents, gone on a worldwide apology tour, weakened America and might be a secret Muslim who was born outside of the United States. Obviously enough, such a terrible person should have been extremely easy to defeat, and Americans should have been clamoring if not for Romney, then at least to be rid of Obama. As such, it makes sense that the people who accept the alternative reality in which Obama is all these things (or at least most of them) were so shocked by what actually happened, namely his being re-elected. The Republican epistemic and critical thinking problems in this regard are well captured by Fox News anchor Megyn Kelly’s question to strategist Karl Rove: “Is this just math that you do as a Republican to make yourself feel better or is it real?”

After Obama’s victory, the conservative politicians, pundits and spinions rushed to provide an explanation for this dire turn of events. Some blame was placed on the Republican party, thus continuing an approach that began long before the election.

Given their epistemic failings, it makes sense that they would believe that the Republican Party is to blame for the failure to beat such an easy opponent. To use an analogy, imagine that fans of a team believe that an opposing team is pathetic, but as the game is played, the “pathetic” team gets ahead and stays there. Rather than re-assess the other team, the fans are likely to start blaming their own team, the coaches and so on for doing so poorly against such a “pathetic” opponent. However, if the opposing team is not as they imagined, then they have the explanation wrong: they are losing because the other team is better. Put another way, their team is not playing against the team they think they are playing against—the pathetic team is a product of their minds and not an objective assessment of the actual team.

In the case of Obama, the conservatives and Republicans would have been rightfully dismayed had they lost to someone as bad as their idea of Obama. However, they did not run against that alternative Obama. They ran against the actual Obama, and he is not as bad as they claim. Hence, it makes sense that they did not do as well as they thought they should. To be fair, the Democrats also had an Obama narrative that is not an unbiased account of the president.

It also makes sense that they would explain the loss by blaming the voters. As Bill O’Reilly explained things, Obama won because there are not enough white male voters and too many non-white and female voters who want “stuff” from the government. This explanation is hardly surprising. After all, Fox News, the main epistemic engine of the Republicans, had been presenting a narrative in which America is divided between the virtuous, hard-working people and those who just want free stuff. There was also a narrative involving race (as exemplified by the obsessive focus on one Black Panther standing near a Philadelphia polling place) and one involving gender. Rush Limbaugh also contributed significantly to these narratives, especially the gender narrative, with his calling Sandra Fluke a slut. On these narratives, non-white people and women are (or have joined forces with) the people who want free stuff, and it is their moral failing that robbed Romney of his rightful victory. However, this narrative fails to be true. While there are some people who want “free stuff”, the reality is rather different from the narrative—as analyzed in some detail by the Baltimore Sun. In response to such actual evidence, the usual reply is to make use of anecdotal evidence in the form of YouTube videos or vague references to someone who just wants free stuff. That is, evidence that is justified is “countered” by unwarranted beliefs based on fallacious reasoning. Ironically, the common reply to the claim that their epistemology is flawed is to simply shovel out more examples of the defective epistemology.

As might be imagined, while the Republicans had a good reason to try to get people to accept their alternative reality as the actual world, some of them seem to have truly believed that the alternative is the actual. This had a rather practical impact: to the degree that they believed in this alternative world that isn’t, their strategies and tactics were distorted. After all, when one goes into battle, accurate intelligence is vital and distorted information is a major liability. It does seem that some folks became victims of their own distortions, and this impacted the election.

People generally want to cling to a beloved narrative, even in the face of overwhelming evidence to the contrary. However, there is a very practical reason for the Republicans to work on their epistemology—if they do not, they keep increasing their odds of losing elections.

 

