Tag Archives: metaphysics

Robot Love I: Other Minds

Thanks to improvements in medicine humans are living longer and can be kept alive well past the point at which they would naturally die. On the plus side, longer life is generally (but not always) good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age. This care can be a considerable burden on the caregivers. Not surprisingly, there has been an effort to develop a technological solution to this problem, specifically companion robots that serve as caregivers.

While the technology is currently fairly crude, there is clearly great potential here and there are numerous advantages to effective robot caregivers. The most obvious are that robot caregivers do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be ideal 24/7/365 caregivers. This makes them superior in many ways to human caregivers who get tired, get depressed, get angry and have many other responsibilities.

There are, of course, some concerns about the use of robot caregivers. Some relate to such matters as their safety and effectiveness while others focus on other concerns. In the case of caregiving robots that are intended to provide companionship and not just things like medical and housekeeping services, there are both practical and moral concerns.

In regards to companion robots, there are at least two practical concerns regarding the companion aspect. The first is whether or not a human will accept a robot as a companion. In general, the answer seems to be that most humans will do so.

The second is whether or not the software will be advanced enough to properly read a human’s emotions and behavior in order to generate a proper emotional response. This response might or might not include conversation—after all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, it would also need to pass an emotion test—that is, read and respond correctly to human emotions. Since humans often botch this, there would be a fairly broad tolerable margin of error here. These practical concerns can be addressed technologically—it is simply a matter of software and hardware. Building a truly effective companion robot might require making it very much like a living thing—the comfort of companionship might be improved by such things as smell, warmth and texture. That is, the companion should appeal to all the senses.

While the practical problems can be solved with the right technology, there are some moral concerns with the use of robot caregiver companions. Some relate to people handing off their moral duties to care for their family members, but these are not specific to robots. After all, a person can hand off the duties to another person and this would raise a similar issue.

In regards to those specific to a companion robot, there are moral concerns about the effectiveness of the care—that is, are the robots good enough that trusting the life of an elderly or sick human would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms to respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit and such a deceit seems to be morally wrong.
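The “cleverly written algorithms” described above can be illustrated with a deliberately crude sketch. Everything here (the keywords, the canned replies) is hypothetical, and real systems use far more sophisticated models, but the philosophical point survives the crudeness: the output is a lookup, not a feeling.

```python
# A toy "emotion recognition + response" pipeline. The keyword lists and
# canned replies are invented for illustration; real companion robots use
# far more sophisticated models, but are mechanical in the same sense.

RESPONSES = {
    "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
    "happy": "That's wonderful to hear!",
    "angry": "I can see this is upsetting. Take your time.",
    "neutral": "How are you feeling today?",
}

def detect_emotion(utterance: str) -> str:
    """Toy 'emotion recognition software': keyword matching, nothing more."""
    lowered = utterance.lower()
    if any(w in lowered for w in ("sad", "lonely", "miss")):
        return "sad"
    if any(w in lowered for w in ("great", "glad", "wonderful")):
        return "happy"
    if any(w in lowered for w in ("hate", "angry", "furious")):
        return "angry"
    return "neutral"

def respond(utterance: str) -> str:
    """Mechanical behavior in accord with software: look up a canned reply."""
    return RESPONSES[detect_emotion(utterance)]
```

However convincing the reply, nothing in this program feels anything; that is precisely the deceit worry.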

One obvious response is that people would realize that the robot does not really experience emotions, yet still gain value from its “fake” companionship. To use an analogy, people often find stuffed animals to be emotionally reassuring even though they are well aware that the stuffed animal is just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect—if someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real in order to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship that a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that grounds this entitlement. One reasonable approach is to build on the idea that people have the capacity to actually feel the emotions they display and to actually understand. In philosophical terms, humans have (or are) minds and robots (of the sort that will be possible in the near future) do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I think that other people have minds by an argument from analogy. I am aware of my own mental states and my behavior and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. For example, a person can fake being in pain or make a claim about love that is untrue. Piercing these deceptions can sometimes be very difficult since humans are often rather good at deceit. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way he wants people to believe he is thinking and feeling.

In contrast, a companion robot is not thinking or feeling what it is displaying in its behavior, because it does not think or feel. Or so it is believed. The reason that a person would think this seems reasonable: in the case of a robot, we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot, however complicated, is just a material machine, incapable of thought or feeling.

Long before robots, there were thinkers who claimed that a human is a material entity and that a suitable understanding of the mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par—both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit or that humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body—that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit—going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does and this will yield all the benefits of having a human companion.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Gender Nominalism

Thanks to Caitlyn Jenner’s appearance in Vanity Fair, the issue of gender identity has become a mainstream topic. While I will not address the specific subject of Caitlyn Jenner, I will discuss the matter of gender nominalism and competition. This will, however, require some small amount of groundwork.

One of the classic problems in philosophy is the problem of universals. Put a bit roughly, the problem is determining in virtue of what (if anything) a particular a is of the type F. To use a concrete example, the question would be “in virtue of what is Morris a cat?” Philosophers tend to split into two main camps when answering this question. One camp, the nominalists, embrace nominalism. Put a bit simply, this is the view that what makes a particular a an F is that we name it an F. For example, what makes Morris a cat is that we call (or name) him a cat.

The other camp, the realists, take the view that there is a metaphysical reality underlying a particular’s being of the type F. Put another way, it is not just a matter of naming or calling something an F that makes it an F. In terms of what makes a be of the type F, different realist philosophers give different answers. Plato famously claimed that it is the Form of F that makes individual F things F. Or, to use an example, it is the Form of Beauty that makes all the beautiful things beautiful. And, presumably, the Form of Ugly that makes the ugly things ugly. Others, such as myself, accept these odd things called tropes (not to be confused with the tropes of film and literature) that serve a similar function.

While realists believe in the reality of some categories, they generally accept that there are some categories that are not grounded in features of objective reality. As such, most realists do accept that the nominalists are right about some categories. To use an easy example, being a Democrat (or Republican) is not grounded in metaphysics, but is a social construct—the political party is made up and membership is a matter of social convention rather than metaphysical reality. Or, put another way, there is presumably no Form of Democrat (or Republican).

When it comes to sorting out sex and gender, the matter is rather complicated and involves (or can involve) four or more factors.  One is the anatomy (plumbing) of the person, which might (or might not) correspond to the second, which is the genetic makeup of the person (XX, XY, XYY, etc.). The third factor is the person’s own claimed gender identity which might (or might not) correspond to the fourth, which is the gender identity assigned by other people.

While anatomy and physiology are adjustable (via chemicals and surgery), they are objective features of reality—while a person can choose to alter her anatomy, merely changing how one designates one’s sex does not change the physical features. While a complete genetic conversion (XX to XY or vice versa) is not yet possible, it is probably just a matter of time. However, even when genetics can be changed on demand, a person’s genetic makeup is still an objective feature of reality—a person cannot (yet) change his genes merely by claiming a change in designation.

Gender is, perhaps, quite another matter. Like many people, I used to use the terms “sex” and “gender” interchangeably—I still recall (running) race entry forms using one or the other, and everyone seemed to know what was meant. However, I eventually learned that the two are not the same—a person might have one biological sex and a different gender. While familiar with the science fiction idea of a multitude of genders, I eventually became aware that this was now a thing in the actual world.

Obviously, if gender is taken as the same as sex (which is set by anatomy or genetics), then gender would be an objective feature of reality and not subject to change merely by a change in labeling (or naming). However, gender has been largely (or even entirely) split from biological sex (anatomy or genetics) and is typically cast in terms of being a social construct. This view can be labeled as “gender nominalism.” By this I mean that gender is not an objective feature of reality, like anatomy, but a matter of naming, like being a Republican or Democrat.

Some thinkers have cast gender as being constructed by society as a whole, while others contend that individuals have lesser or greater ability to construct their own gender identities. People can place whatever gender label they wish upon themselves, but there is still the question of the role of others in that gender identity. The question is, then, to what degree can individuals construct their own gender identities? There is also the moral question about whether or not others are morally required to accept such gender self-identification. These matters are part of the broader challenge of identity in terms of who defines one’s identity (and what aspects) and to what degree are people morally obligated to accept these assignments (or declarations of identity).

My own view is to go with the obvious: people are free to self-declare whatever gender they wish, just as they are free to make any other claim of identity that is a social construct (which is a polite term for “made up”). So, a person could declare that he is a straight, Republican, Rotarian, fundamentalist, Christian man. Another person could declare that she is a lesbian, Republican, Mason, Jewish woman. And so on. But, of course, there is the matter of getting others to recognize that identity. For example, if a person identifies as a Republican, yet believes in climate change, argues for abortion rights, endorses same-sex marriage, supports Obama, favors tax increases, supports education spending, endorses the minimum wage, and is pro-environment, then other Republicans could rightly question the person’s Republican identity and claim that that person is a RINO (Republican in Name Only). As another example, a biological male could declare identity as a woman, yet still dress like a man, act like a man, date women, and exhibit no behavior that is associated with being a woman. In this case, other women might (rightly?) accuse her of being a WINO (Woman in Name Only).

In cases in which self-identification has no meaningful consequences for other people, it certainly makes sense for people to freely self-identify. In such cases, claiming to be F makes the person F, and what other people believe should have no impact on that person being F. That said, people might still dispute a person’s claim. For example, if someone self-identifies as a Trekkie, yet knows little about Star Trek, others might point out that this self-identification is in error. However, since this has no meaningful consequences, the person has every right to insist on being a Trekkie, though doing so might suggest that he is about as smart as a tribble.

In cases in which self-identification does have meaningful consequences for others, then there would seem to be moral grounds (based on the principle of harm) to allow restrictions on such self-identification. For example, if a relatively fast male runner wanted to self-identify as a woman so “she” could qualify for the Olympics, then it would seem reasonable to prevent that from happening. After all, “she” would bump a qualified (actual) woman off the team, which would be wrong. Because of the potential for such harms, it would be absurd to accept that everyone is obligated to accept the self-identification of others.

The flip side of this is that others should not have an automatic right to deny the self-identification of others. As a general rule, the principle of harm would seem to apply here as well—the others would have the right to impose in cases in which there is actual harm and the person would have the right to refuse the forced identity of others when doing so would inflict wrongful harm. The practical challenge is, clearly enough, working out the ethics of specific cases.

 


Information Immortality

Most people are familiar with the notion that energy cannot be destroyed. Interestingly, there is also a rule in quantum mechanics that forbids the destruction of information. This principle, called unitarity, is often illustrated by the example of burning a book: though the book is burned, the information still remains—although it would obviously be much harder to “read” a burned book. This principle has, in recent years, run into some trouble with black holes, which might or might not be able to destroy information. My interest here is not with this specific dispute, but rather with the question of whether or not the indestructibility of information has any implications for immortality.
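As a loose illustration of the unitarity idea, consider a reversible encoding: the scrambled text is as unreadable as a burned book, yet no information is lost, since an inverse operation recovers the original exactly. The function names here are mine, purely for illustration.

```python
# A reversible ("unitary-like") transformation scrambles a message beyond
# recognition yet destroys no information: the inverse recovers it exactly.
# A lossy transformation, by contrast, genuinely discards information.

def scramble(text: str, key: int = 0x5A) -> bytes:
    """Reversible: XOR each byte with the key."""
    return bytes(b ^ key for b in text.encode("utf-8"))

def unscramble(data: bytes, key: int = 0x5A) -> str:
    """XOR is its own inverse, so the original text comes back."""
    return bytes(b ^ key for b in data).decode("utf-8")

def burn_lossy(text: str) -> int:
    """Lossy: keep only the length. Many books map to the same 'ashes'."""
    return len(text)

book = "It was the best of times, it was the worst of times."
ashes = scramble(book)            # unreadable, but nothing is lost
assert unscramble(ashes) == book  # the information was conserved
```

The lossy version is the intuitive picture of burning: many different books map to the same ashes, so the original cannot be recovered. Unitarity says the physical world works like the first function, not the second.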

On the face of it, the indestructibility of information seems rather similar to the conservation of energy. Long ago, when I was an undergraduate, I first heard the argument that because of the conservation of energy, personal immortality must be real (or at least possible). The basic line of reasoning was that a person is energy, energy cannot be destroyed, so a person will exist forever. While this has considerable appeal, the problem is obvious: while energy is conserved, it certainly need not be preserved in the same form. That is, even if a person is composed of energy it does not follow that the energy remains the same person (or even a person). David Hume was rather clear about the problem—an indestructible or immortal substance (or energy) does not entail the immortality of a person. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures.  By analogy, an immaterial substance could successively make up the minds of living creatures—the substance would not be created or destroyed, it would merely change form. However, the person would cease to be.

Prior to Hume, John Locke also noted the same sort of problem: even if, for example, you had the same soul (or energy) as Nestor, you would not be the same person as Nestor any more than you would be the same person as Nestor if, in an amazing coincidence, your body contained at this instant all the atoms that composed Nestor at a specific instant in time.

Hume and Locke certainly seem to be right about this—the indestructibility of the stuff that makes up a person (be it body or soul) does not entail the immortality of the person. If a person is eaten by a bear, the matter and energy that composed him will continue to exist—but the person did not survive being eaten by the bear. If there is a soul, the mere continuance of the soul would also not seem to suffice for the person to continue to exist as the same person (although this can obviously be argued). What would be needed would be the persistence of what makes up the person. This is usually taken to be something other than just stuff, be that stuff matter, energy, or ectoplasm. So, the conservation of energy does not seem to entail personal immortality—but the conservation of information might (or might not).

Put a bit crudely, Locke took this something other to be memory: personal identity extends backwards as far as the memory extends. Since people clearly forget things, Locke did accept the possibility of memory loss. Being consistent in this matter, he accepted that the permanent loss of memory would result in a corresponding failure of identity. Crudely put, if a person truly did not and could never remember doing something, then she was not the person who did it.

While there are many problems with the memory account of personal identity, it certainly suggests a path to quantum immortality through the conservation of information. One approach would be to argue that since information is conserved, the person is conserved even after the death and dissolution of the body. Just like the burned book whose information still exists, the person’s information would still exist.

One obvious reply to this is that a person is an active being and not just a collection of information. To use a rather rough analogy, a person could be seen as being like a computer program—to be is to be running. Or, to use a more artistic analogy, like a play: while the script would persist after the final curtain, the play itself is over. As such, while the person’s information would be conserved, the person would cease to be. This sort of “quantum immortality” is remarkably similar to Spinoza’s view of immortality. While he denied personal immortality, he claimed that “the human mind cannot be absolutely destroyed with the body, but something of it remains which is eternal.” Spinoza, of course, seemed to believe that this should comfort people. Perhaps some comfort should be taken in the fact that one’s information will be conserved (barring an unfortunate encounter with a black hole).

However, people would probably be more comforted by a reason to believe in an afterlife. Fortunately, the conservation of information does provide at least a shot at an afterlife. If information is conserved and all there is to a person can be conserved as information, then a person could presumably be reconstructed after his death. For example, imagine a person, Laz, who died in an accident and was buried. The remains could, in theory, be dug up and the information about the body could be recovered (to a point prior to death, of course). The body could, with suitably advanced technology, be reconstructed. The reconstructed brain could, in theory, have all the memories and such recovered and restored as well. This would be a technological resurrection in the flesh and the person would certainly seem to live again. Assuming that every piece of information was preserved, recovered and restored in the flesh, it would be the person—just as if a moment had passed rather than, say, a thousand years. This would be, obviously, in theory. Actual resurrection technology would presumably involve various flaws and limitations. But, the idea seems sound enough.
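The reconstruction scenario can be loosely modeled as serialization: store a complete record, discard the original, and rebuild an exact copy from the stored data. The PersonRecord type below is a hypothetical toy, not a claim about how minds could actually be encoded.

```python
# A loose analogy for reconstruction from conserved information: serialize
# a record, delete the original object, and rebuild an indistinguishable
# copy from the stored data alone.

import json
from dataclasses import dataclass, asdict

@dataclass
class PersonRecord:
    name: str
    memories: list

laz = PersonRecord(name="Laz", memories=["won the race", "met old friends"])

stored = json.dumps(asdict(laz))   # the "conserved information"
del laz                            # the original is gone

# Reconstruction: an exact copy, built from nothing but the stored data.
rebuilt = PersonRecord(**json.loads(stored))
```

Note that nothing stops the stored data from being loaded twice, which is exactly the duplication worry: if reconstruction works once, it works many times.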

One potential problem is an old one for philosophers—if a person could be reconstructed from such information, she could also be duplicated from such information. To use the obvious analogy, this would be like 3D printing from a data file, except what would be printed would be a person. Or, to use another analogy, it would be like reconstructing an old computer and reloading all the software. There would certainly not be any reason to wait until the person died, unless there was some sort of copyright or patent held by the person on herself that expired a certain time after her death.

In closing, I leave you with this: some day in the far future, you might find that you (or someone like you) have just been reprinted. In 3D, of course.


Who Decides Who is Muslim?


When discussing ISIS, President Obama refuses to label its members as “Islamic extremists” and has stressed that the United States is not at war with Islam. Not surprisingly, some of his critics and political opponents have taken issue with this and often insist on labeling the members of ISIS as Islamic extremists or Islamic terrorists.  Graeme Wood has, rather famously, argued that ISIS is an Islamic group and is, in fact, adhering very closely to its interpretations of the sacred text.

Laying aside the political machinations, there is a rather interesting philosophical and theological question here: who decides who is a Muslim? Since I am not a Muslim or a scholar of Islam, I will not be examining this question from a theological or religious perspective. I will certainly not be making any assertions about which specific religious authorities have the right to say who is and who is not a true Muslim. Rather, I am looking at the philosophical matter of the foundation of legitimate group identity. This is, of course, a variation on one aspect of the classic problem of universals: in virtue of what (if anything) is a particular (such as a person) of a type (such as being a Muslim)?

Since I am a metaphysician, I will begin with the rather obvious metaphysical starting point. As Pascal noted in his famous wager, God exists or God does not.

If God does not exist, then Islam (like all religions that are based on a belief in God) would have an incorrect metaphysics. In this case, being or not being a Muslim would be a social matter. It would be comparable to being or not being a member of Rotary, being a Republican, a member of Gulf Winds Track Club or a citizen of Canada. That is, it would be a matter of the conventions, traditions, rules and such that are made up by people. People do, of course, often take this made up stuff very seriously and sometimes are quite willing to kill over these social fictions.

If God does exist, then there is yet another dilemma: God is either the God claimed (in general) in Islamic metaphysics or God is not. One interesting problem with sorting out this dilemma is that in order to know if God is as Islam claims, one would need to know the true definition of Islam—and thus what it would be to be a true Muslim. Fortunately, the challenge here is metaphysical rather than epistemic. If God does exist and is not the God of Islam (whatever it is), then there would be no “true” Muslims, since Islam would have things wrong. In this case, being a Muslim would be a matter of social convention—belonging to a religion that was right about God existing, but wrong about the rest. There is, obviously, the epistemic challenge of knowing this—and everyone thinks he is right about his religion (or lack of religion).

Now, if God exists and is the God of Islam (whatever it is), then being a “true” member of a faith that accepts God, but has God wrong (that is, all the non-Islam monotheistic faiths), would be a matter of social convention. For example, being a Christian would thus be a matter of the social traditions, rules and such. There would, of course, be the consolation prize of getting something right (that God exists).

In this scenario, Islam (whatever it is) would be the true religion (that is, the one that got it right). From this it would follow that the Muslim who has it right (believes in the true Islam) is a true Muslim. There is, however, the obvious epistemic challenge: which version and interpretation of Islam is the right one? After all, there are many versions and even more interpretations—and even assuming that Islam is the one true religion, only the one true version can be right. Unless, of course, God is very flexible about this sort of thing. In this case, there could be many varieties of true Muslims, much like there can be many versions of “true” runners.

If God is not flexible, then most Muslims would be wrong—they are not true Muslims. This then leads to the obvious epistemic problem: even if it is assumed that Islam is the true religion, then how does one know which version has it right? Naturally, each person thinks he (or she) has it right. Obviously enough, intensity of belief and sincerity will not do. After all, the ancients had intense belief and sincerity in regard to what are now believed to be made up gods (like Thor and Athena). Going through books and writings will also not help—after all, the ancient pagans had plenty of books and writings about what we regard as their make-believe deities.

What is needed, then, is some sort of sure sign—clear and indisputable proof of the one true view. Naturally, each person thinks he has that—and everyone cannot be right. God, sadly, has not provided any means of sorting this out—no glowing divine auras around those who have it right. Because of this, it seems best to leave this to God. Would it not be truly awful to go around murdering people for being “wrong” when it turns out that one is also wrong?

 


A Philosopher’s Blog: 2014 Free on Amazon

A Philosopher’s Blog: 2014 Philosophical Essays on Many Subjects will be available as a free Kindle book on Amazon from 12/31/2014-1/4/2015. This book contains all the essays from the 2014 postings of A Philosopher’s Blog. The topics covered range from the moral implications of sexbots to the metaphysics of determinism. It is available on all the various national Amazons, such as in the US, UK, and India.

A Philosopher’s Blog: 2014 on Amazon US

A Philosopher’s Blog: 2014 on Amazon UK


Philosopher’s Carnival No. 146

Hello new friends, philosophers, and likeminded internet creatures. This month TPM is hosting the Philosopher’s Carnival.

Something feels wrong with the state of philosophy today. Whence has this sense of ill-boding come?

For this month’s Carnival, we shall survey a selection of recent posts that are loosely arranged around the theme of existential threats to contemporary philosophy. I focus on four. Pre-theoretic intuitions seem a little less credible as sources of evidence. Talk about possible worlds seems just a bit less scientific. The very idea of rationality looks as though it is being taken over by cognate disciplines, like cognitive science and psychology. And some of the most talented philosophers of the last generation have taken up arms against a scientific theory that enjoys a strong consensus. Some of these threats are disturbing, while others are eminently solvable. All of them deserve wider attention.

1. Philosophical intuitions

Over at Psychology Today, Paul Thagard argued that armchair philosophy is dogmatic. He lists eleven unwritten rules that he believes are a part of the culture of analytic philosophy. Accompanying each of these dogmas he proposes a remedy, ostensibly from the point of view of the sciences. [Full disclosure: Paul and I know each other well, and often work together.]

Paul’s list is successful in capturing some of the worries that are sometimes expressed about contemporary analytic philosophy. It acts as a bellwether, a succinct statement of defiance. Unfortunately, I do not believe that most of the items on the list hit their target. But I do think that two points in particular cut close to the bone:

3. [Analytic philosophers believe that] People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don’t trust your intuitions.

4. [Analytic philosophers believe that] Thought experiments are a good way of generating intuitive evidence. Natural alternative: use thought experiments only as a way of generating hypotheses, and evaluate hypotheses objectively by considering evidence derived from systematic observations and controlled experiments.

From what I understand, Paul is not arguing against the classics in analytic philosophy. (e.g., Carnap was not an intuition-monger.) He’s also obviously not arguing against the influential strain of analytic philosophers that are descendants of Quine — indeed, he is one of those philosophers. Rather, I think Paul is worried that contemporary analytic philosophers have gotten a bit too comfortable in trusting their pre-theoretic intuitions when they are prompted to respond to cases for the purpose of delineating concepts.

As Catarina Dutilh Novaes points out, some recent commentators have argued that no prominent philosophers have ever treated pre-theoretic intuitions as a source of evidence. If that’s true, then it would turn out that Paul is entirely off base about the role of intuition in philosophy.

Unfortunately, there is persuasive evidence that some influential philosophers have treated some pre-theoretic intuitions as being a source of evidence about the structure of concepts. For example, Saul Kripke (in Naming & Necessity, 1972, p. 42) explained that intuitiveness is the reason why there is a distinction between necessity and contingency in the first place: “Some philosophers think that something’s having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of it, myself. I really don’t know, in a way, what more conclusive evidence one can have about anything, ultimately speaking”.

2. Philosophical necessity

Let’s consider another item from Paul’s list of dogmas:

8. There are necessary truths that apply to all possible worlds. Natural alternative: recognize that it is hard enough to figure out what is true in this world, and there is no reliable way of establishing what is true in all possible worlds, so abandon the concept of necessity.

In this passage Paul makes a radical claim. He argues that we should do away with the very idea of necessity. What might he be worried about?

To make a claim about the necessity of something is to make a claim about its truth across all possible worlds. Granted, our talk about possible worlds sounds kind of spooky, but [arguably] it is really just a pragmatic intellectual device, a harmless way of speaking. If you like, you could replace the idea of a ‘possible world’ with a ‘state-space’. When computer scientists at Waterloo learn modal logic, they replace one idiom with another — seemingly without incident.
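The idiom swap can be made concrete. Here is a minimal sketch, in Python, of the ‘state-space’ reading of possible-worlds talk (the world names, accessibility relation, and valuation are my own toy example, not anything from the post): ‘necessarily’ and ‘possibly’ come out as nothing spookier than quantification over reachable states.

```python
# A toy Kripke-style model: 'possible worlds' are just labelled states.
worlds = {"w0", "w1", "w2"}

# The accessibility relation: which states are reachable from which.
access = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": set()}

# A valuation: which atomic propositions hold at which states.
val = {"p": {"w1", "w2"}, "q": {"w1"}}

def holds(prop, w):
    """True iff atomic proposition `prop` holds at state `w`."""
    return w in val[prop]

def necessarily(prop, w):
    """Box: `prop` holds at every state accessible from `w`."""
    return all(holds(prop, v) for v in access[w])

def possibly(prop, w):
    """Diamond: `prop` holds at some state accessible from `w`."""
    return any(holds(prop, v) for v in access[w])
```

On this reading, a necessity claim is a universally quantified claim about reachable states, which is presumably why the one idiom can replace the other without incident.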

If possible worlds semantics is just a way of speaking, then it would not be objectionable. Indeed, the language of possible worlds seems to be cooked into the way we reason about things. Consider counterfactual claims, like “If Oswald hadn’t shot Kennedy, nobody else would’ve.” These claims are easy to make and come naturally to us. You don’t need a degree in philosophy to talk about how things could have been, you just need some knowledge of a language and an active imagination.

But when you slow down and take a closer look at what has been said there, you will see that the counterfactual claim involves discussion of a possible (imaginary) world where Kennedy had not been shot. We seem to be talking about what that possible world looks like. Does that mean that this other possible world is real — that we’re making reference to this other universe, in roughly the same way we might refer to the sun or the sky? Well, if so, then that sounds like it would be a turn toward spooky metaphysics.

Hence, some philosophers seem to have gone a bit too far in their enthusiasm for the metaphysics of possible worlds. As Ross Cameron reminds us, David K. Lewis argued that possible worlds are real:

For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world with round squares is a world with round squares as parts.  And so, to believe in the latter world is to believe in round squares.  And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which could not exist.  In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, or etc.

And to make matters worse, some people even argue that impossible worlds are real, ostensibly for similar reasons. Some people…

…like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to include impossible worlds.

Much like the Red Queen, proponents of this view want to do impossible things before breakfast. The only difference is that they evidently want to keep at it all day long.

Cameron argues that there is a difference between different kinds of impossibility, and that at least one form of impossibility cannot be part of our ontology. If you’re feeling dangerous, you can posit impossible concrete things, e.g., round squares. But you cannot say that there are worlds where ‘2+2=5’ and still call yourself a friend of Lewis:

For Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4.

While Cameron presents us with a cogent rebuttal to the impossibilist, his objection still leaves open the possibility that there are impossible worlds — at least, so long as the impossible worlds involve exotic concrete entities like the round square and not incoherent abstracta.

So what we need is a scientifically credible account of necessity and possibility. In a whirlwind of a post over at LessWrong, Eliezer Yudkowsky argues that when we reason using counterfactuals, we are making a mixed reference which involves reference to both logical laws and the actual world.

[I]n one sense, “If Oswald hadn’t shot Kennedy, nobody else would’ve” is a fact; it’s a mixed reference that starts with the causal model of the actual universe where [Oswald was a lone agent], and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like ‘six’ for the product of apples on the table, is not actually present anywhere in the universe.

Yudkowsky argues that this is part of what he calls the ‘great reductionist project’ in scientific explanation. For Yudkowsky, counterfactual reasoning is quite important to the project and prospects of a certain form of science. Moreover, claims about counterfactuals can even be true. But unlike Lewis, Yudkowsky doesn’t need to argue that counterfactuals (or counterpossibles) are really real. This puts Yudkowsky on some pretty strong footing. If he is right, then it is hardly any problem for science (cognitive or otherwise) if we make use of a semantics of possible worlds.
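The ‘counterfactual surgery’ idea can be sketched in a few lines. This toy structural model (the variable names and the lone-agent stipulation are my own illustration, not Yudkowsky’s formalism or a claim about the actual history) computes the counterfactual by severing a variable’s causes, fixing its value by fiat, and recomputing everything downstream:

```python
def kennedy_dies(oswald_shoots, backup_exists):
    """Toy structural equation: Kennedy dies if Oswald shoots,
    or if some backup shooter exists and fires."""
    return oswald_shoots or backup_exists

# The actual world, as modelled: Oswald was a lone agent.
actual = {"oswald_shoots": True, "backup_exists": False}
assert kennedy_dies(**actual)  # matches the model's 'actual' outcome

# Counterfactual surgery: ignore the causes of 'oswald_shoots',
# set it to False by fiat, and recompute the downstream variable.
surgery = dict(actual, oswald_shoots=False)
assert not kennedy_dies(**surgery)
```

The answer — that in the surgically altered model nobody else shoots and Kennedy survives — is not located anywhere in the actual universe; it falls out of a causal model plus a logical operation, which is the ‘mixed reference’ point.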

Notice that for Yudkowsky’s project to work, there has to be such a thing as a distinction between abstracta and concreta in the first place, such that both are the sorts of things we’re able to refer to. But what, exactly, does the distinction between abstract and concrete mean? Is it perhaps just another way of upsetting Quine by talking about the analytic and the synthetic?

In a two-part analysis of reference [here, then here], Tristan Haze at Sprachlogik suggests that we can understand referring activity as contact between nodes belonging to distinct language-systems. In his vernacular, reference to abstract propositions involves the direct comparison of two language-systems, while reference to concrete propositions involves the coordination of systems in terms of a particular object. But I worry that unless we learn more about the causal and representational underpinnings of a ‘language-system’, there is no principled reason that stops us from inferring that his theory of reference is actually just a comparison of languages. And if so, then it would be well-trod territory.

3. Philosophical rationality

But let’s get back to Paul’s list. Paul seems to think that philosophy has drifted too far away from contemporary cognitive science. He worries that philosophical expertise is potentially cramped by cognitive biases.

Similarly, at LessWrong, Lukeprog worries that philosophers are not taking psychology very seriously.

Because it tackles so many questions that can’t be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn’t: we generally are as “stupid and self-deceiving” as science assumes we are. We’re “predictably irrational” and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one’s rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn’t seem so. I don’t see much Kahneman & Tversky in philosophy syllabi — just light-weight “critical thinking” classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don’t like. So what’s really needed is regular habits training for genuine curiosity, motivated-cognition mitigation, and so on.

In some sense or other, Luke is surely correct. Philosophers really should be paying close attention to the antecedents of (ir)rationality, and really should be training their students to do exactly that. Awareness of cognitive illusions must be a part of the philosopher’s toolkit.

But does that mean that cognitive science should be a part of the epistemologist’s domain of research? The answer looks controversial. Prompted by a post by Leah Lebresco, Eli Horowitz at Rust Belt Philosophy argues that we also need to take care that we don’t just conflate cognitive biases with fallacies. Instead, Horowitz argues that we ought to make a careful distinction between cognitive psychology and epistemology. In a discussion of a cognitive bias that Lebresco calls the ‘ugh field’, Horowitz writes:

On its face, this sort of thing looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature… it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself.

In principle, Eli’s point is sound. There is, after all, at least a superficial difference between dispositions to (in)correctness, and actual facts about (in)correctness. But even if you think he is making an important distinction, Leah seems to be making a useful practical point about how philosophers can benefit from a change in pedagogy. Knowledge of cognitive biases really should be a part of the introductory curriculum. Development of the proper reasoning environment is, for all practical purposes, of major methodological interest to those who teach how to reason effectively. So it seems that in order to do better philosophy, philosophers must be prepared to do some psychology.

4. Philosophical anti-Darwinism

The eminent philosopher Thomas Nagel recently published a critique of Darwinian accounts of evolution through natural selection. In this effort, Nagel joins Jerry Fodor and Alvin Plantinga, who have also published philosophical worries about Darwinism. The works in this subgenre have by and large been thought to be lacking in empirical and scholarly rigor. This trend has caused a great disturbance in the profession, as philosophical epistemologists and philosophers of science are especially sensitive to the ridicule they face from scientists who write in the popular press.

Enter Mohan Matthen. Writing at NewAPPS, Mohan worries that some of the leading lights of the profession are not living up to expectations.

Why exactly are Alvin Plantinga and Tom Nagel reviewing each other? And could we have expected a more dismal intellectual result than Plantinga on Nagel’s Mind and Cosmos in the New Republic? When two self-perceived victims get together, you get a chorus of hurt: For recommending an Intelligent Design manifesto as Book of the Year, Plantinga moans, “Nagel paid the predictable price; he was said to be arrogant, dangerous to children, a disgrace, hypocritical, ignorant, mind-polluting, reprehensible, stupid, unscientific, and in general a less than wholly upstanding citizen of the republic of letters.”

My heart goes out to anybody who utters such a wail, knowing that he is himself held in precisely the same low esteem. My mind, however, remains steely and cold.

Plantinga writes, “Nagel supports the commonsense view that the probability of [life evolving by natural selection] in the time available is extremely low.” And this, he says, is “right on target.” This is an extremely substantive scientific claim—and given Plantinga’s mention of “genetic mutation”, “time available,” etc., it would seem that he recognizes this. So you might hope that he and Nagel had examined the scientific evidence in some detail, for nothing else would justify their assertions on this point. Sadly, neither produces anything resembling an argument for their venturesome conclusion, nor even any substantial citation of the scientific evidence. They seem to think that the estimation of such probabilities is well within the domain of a priori philosophical thought. (Just to be clear: it isn’t.)

Coda

Pre-theoretic intuitions are here to stay, so we have to moderate how we think about their evidential role. The metaphysics of modality cannot be dismissed out of hand — we need necessity. But we also need for the idea of necessity to be tempered by our best scientific practices.

The year is at its nadir. November was purgatory, as all Novembers are. But now December has arrived, and the nights have crowded out the days. And an accompanying darkness has descended upon philosophy. Though the wind howls and the winter continues unabated, we can find comfort in patience. Spring cannot be far off.

Issue No.147 of the Philosopher’s Carnival will be hosted by Philosophy & Polity. See you next year.

Close Encounters of the Cancer Kind: Is Philosophy a Preparation for Death?

There is nothing like a diagnosis of stage four inoperable lung cancer with bone metastases to give one a shock. I have known since I took logic as a young man that “Human beings are mortal. Socrates is a human being. Therefore, Socrates is mortal.” However, I was not Socrates, and as far as I was concerned that syllogism was just an example of a valid argument. Yet when you put your own name in place of “Socrates” things look very different. Now I am an oldish philosopher (67), and suddenly my own death in the fairly near future has become a real possibility. Mortality approaches.

I know that philosophers concern themselves mostly with abstract and very general questions in epistemology, metaphysics, logic, ethics, etc. By and large they do not approach philosophical questions from a personal perspective. Even death can be approached as an intellectual or conceptual problem. However, when Santa gave me my cancer diagnosis for Christmas 2011, abstract philosophy and my personal experience unavoidably came together. I now wonder if I can write in a very personal way about the universal truth that we are all going to die, what this means, and if there is anything of general import that I can express about what is happening in my own case. This breaks some common views of what philosophy is, but I do not have time to care about that now. So I am addressing you from a personal perspective, from my frame of life, and I ask your indulgence.

Let me state my tentative conclusion at the start. I do feel that having studied philosophy seriously for 46 years allowed me to keep my calm when the doctor gave me my diagnosis after a routine CT scan. For a second, I sat there feeling nothing at all. However, the next thought that came to me was gratitude for the life I have lived. Maybe other people do not feel this. Kübler-Ross famously discusses five stages of grief and loss: denial, anger, bargaining, depression, and acceptance. I seemed to skip the first four. This is not to say that I instantly reached acceptance, but I did come first to gratitude. Now, after six months of living with lung cancer, I am trying to understand what acceptance of death may amount to.

Each of us can only judge and describe the world from our own time frame. If I had been much younger, my response to the diagnosis might have conformed more to Dr. Kübler-Ross’s formula. The world looks very different at different stages of life. Nevertheless, how one has looked, thought, and felt about life and death throughout one’s life has to make a difference at the end. In my case, the lens through which I have considered life has always been philosophical. Snatches of philosophical thoughts have lodged in my mind since I was young. These are like seeds that took root deep in my mind and have matured and grown over the years. Now I feel that they are bearing fruit, helping me to live a new and deeper life. One nugget stands out to complete this first meditation on life and death.

Plato famously stated that “Philosophy is a preparation for death.” The Greek word that Plato uses for ‘preparation’ is ‘melete’, and the root meaning is ‘care’ or ‘attention’. It can also mean ‘meditation,’ ‘practice’ or ‘exercise’. So are philosophers supposed to ‘practice’ dying, or simply to recollect the fact of mortality as they live their lives? What difference will that make?

I confess a great love of Plato and his amazing Socrates. However, I cannot go along with his tentative conclusions. We know what Socrates argues in the Phaedo. The reason that practicing philosophy is a preparation for death is that Socrates believes that the soul and the body are separable, that the soul is immortal, and that a very different after-life awaits those who have lived a good or evil life. Therefore, it behooves us to separate our own soul from our body as much as possible while we live and to detach ourselves from the preoccupations of mundane life.

The reason that I admire Socrates in the Phaedo is that after giving his ‘proofs’ of the immortality of the soul, he has the greatness to admit that his arguments are only the reasons he personally accepts to advance his position. He does not claim that they absolutely prove the soul is immortal. It is a postulate of Socrates’ practical metaphysics. In fact, he says that if he is wrong, and death is total extinction, then he will never know he is wrong, and his folly will be buried with him.

So in what sense can the study of philosophy be a preparation for death if one does not accept metaphysical dualism? I do not accept any such thing, but I still feel that my study of philosophy has helped me prepare for my present state. Does this mean that the study of any topic in philosophy will have this effect? I do not think so. I am not at all sure that one would prepare for death very well by spending 40 years working in the salt-mines of post-Gettier epistemology, nor in picking over all the convoluted arguments in mereology and inductive logic.

To see how the study of philosophy might be of value in preparing to die, we have to go back to the root meaning of ‘philosophy’ as the ‘love of wisdom’. Wisdom is not a topic that comes up very much in contemporary philosophy. It was more to the fore in the ancient world, where wisdom, ethics, and the question of living a good human life were brought together in a philosophical approach to living. For me, loving wisdom has to do with taking up the largest possible perspective in which to live one’s life, going all the way back to the Big Bang, including all of space and time, the natural history of the universe, the geology of the earth, and the total history of animals and human beings on this planet spinning through a gigantic universe. It covers all the natural cycles of life and death and sees everything as part of this comprehensive whole. Somehow, living in this context has helped me see life and death as part of a seamless process. Death shadows life as naturally as the shadow one casts on the ground on a sunny day. There is no point in denying it, and no point in worrying about it. Perhaps acceptance lies in this direction.

Get That Chip Out of My Brain!

There has of late been some discussion of free will and determinism, and particularly the relative merits of compatibilism versus incompatibilism, at various blogs. (See, for example, here, here and here.)

I must confess that I’ve not followed these discussions closely, despite having a longstanding interest in this issue (see here and here, for instance), so I don’t really have anything substantive to say about the debate, except, I guess, that I’m inclined towards the sort of incompatibilism espoused by Jerry Coyne (my hands were strangely reluctant to type that).

However, this does seem like an opportune moment to ask the readers of Talking Philosophy for their advice and opinions about an interactive activity that I put together at Philosophy Experiments, which explored some of these issues through a look at a Frankfurt Case and some other stuff. It’s here:

Get That Chip Out of My Brain!

Thing is, I programmed the activity about six months ago now, but I was never happy with it, and haven’t added it to the front page of the site (it’s been played quite a lot because of traffic that comes in via Google, etc.).

Basically, my view is that most people will find the stuff about “Transfer NR” (John Martin Fischer & Mark Ravizza) confusing and philosophically suspect – it seems tricksy – and I tend to think that I ought to rewrite the whole activity, focussing on the Harry Frankfurt stuff, which I think works much better.

If anybody felt inclined to play through the activity (it’ll only take a few minutes), and let me know if they agree, disagree, or have any other thoughts, that would be really helpful. If it turns out that even a few people think it doesn’t work, then I’ll almost certainly rewrite the thing (because I think there is a good interactive exercise in there somewhere, but I’m not sure this is it).

Practical Metaphysics: The Case of Freewill and Fatalism

Do humans act of their own free will, or is everything that people do merely the result of universal causation? Are free will and determinism compatible or incompatible? Does fate rule whether or not free will exists? These questions are metaphysical because neither science nor the techniques of formal logic can answer them once and for all. This is the first principle of practical metaphysics. The second is that it is necessary in life to adopt some metaphysical beliefs. The third is that some of these beliefs have practical consequences for one’s life. Free will conforms to the second principle, because everyone takes a stand on the question. However, not all metaphysical beliefs have practical consequences, so we must examine each case as it comes up.

Believing in the existence of free will clearly does have practical consequences. Believers are willing to accept responsibility for their actions. They think that their choices matter. The future is not a foregone conclusion. Praise and blame lose their grip if a person “cannot help” acting in a certain way. Another consequence is that such people will be less likely to blame others or circumstances for their own mistakes. Still another is that belief in free will supports an optimistic attitude. It makes sense of trying to do better, believing the future is open, and that it is actually possible to improve.

Does the belief in determinism have practical consequences? Perhaps. If it turns out that the truth of universal causation determines human actions, and if actions can be reduced to physical actions and chemical processes, then it is indeed true that all my actions will be determined in advance by antecedent causes. What difference would the truth of this assertion make to how I live my life? We are unable to know the entire antecedent universe. Whether or not it is true that the future is determined in advance, the future is opaque to us. We learn from experience what happens regularly in different circumstances, all things being equal. However, we cannot know if all things are equal in any particular case. Hence, we might be excused for thinking that a belief in metaphysical determinism makes no difference to the life of an agent.

Is this the whole story? Might it be possible to use a belief in determinism as a universal excuse for one’s actions? If my body and body chemistry move along with the universal causal nexus regardless of what I think, plan, feel or do, then what do my choices and reasons mean? Can I, therefore, abdicate my responsibility along with my free will by adopting a thorough-going metaphysical determinism? Or, does my ignorance of determining conditions make it impossible for me to give up my sense that I am responsible for my choices and actions?

If believing in determinism is a way to deny personal responsibility, then accepting it has practical consequences. It is an approach to life. Perhaps it would be better here to speak of the attitude of fatalism rather than universal determinism. With fatalism we can accept that we have to make choices, but believe that no matter what choices we make, our fate is sealed. Think of Somerset Maugham’s old story about the man who met the person of Death in Cairo, ran for his life to Samarra, only to find Death waiting for him there, saying “When I saw you in Cairo, I thought you might be late for our date in Samarra, but here you are.” It was fate.

Fatalism is the view that what will be, will be, and nothing can change that. Might not taking on this view turn a person into a quietist who lives a still and passive life? Perhaps, if one believes in fate, one will not struggle against it. A clear literary example of this is described in Richard Adams’s epic rabbit adventure, Watership Down. At one point, Hazel and the other rabbits who are striking out to find a new home run into a tribe of rabbits who live a well-fed and pleasant life. However, they are taken for the pot one by one. All these rabbits know that one day they will be taken, but they do not know what that day will be. So they spend their time writing poetry and putting on tragic dramas, waiting quiescently for their individual ends. Hazel discovers what is going on and offers them a chance to escape. The ‘artistic’ rabbits turn down the offer by saying that their lives are their fate and they are resigned to it.

Perhaps there is another way, too, that belief in fate might affect one’s approach to life. There is a scene in Johnson’s “Rasselas” in which the hero meets a scientist who is weighed down by his conviction that he controls much of the weather and brings up the sun each morning from the top of his observatory. He is cured when he realizes that it is all a fantasy in his head. Finding out that something is not within one’s own power can be a relief. Responsibility is a heavy burden that can be laid down when one finds that the issue is out of one’s control. If we combine that with the idea of God’s providence, we have a source of consolation as well. I conclude that believing in free will or fatalism has practical consequences for the life of the believer, and thus falls within the subject matter of practical metaphysics.

Practical Metaphysics: The Case of God

Why should anyone bother about metaphysical questions? Spending time discussing them may seem speculative and inconsequential. However, while all metaphysical reasoning is speculative, it is far from inconsequential. Taking up a metaphysical stance is both unavoidable and has profound consequences for human life. To take the case of God, there are practical consequences for believers, atheists, agnostics and even those who are indifferent to the whole question of God’s existence. Practical metaphysics brings to our awareness both the nature of metaphysical thinking and the consequences that accompany and flow from it.

The first principle of practical metaphysics is that metaphysical propositions are never conclusively proved. The second is that human beings are obliged to believe at least some metaphysical propositions. The third is that belief in some unavoidable metaphysical propositions brings practical consequences. Metaphysical beliefs come with a price tag, and we do well to be aware of this in adopting one metaphysical stance or another.

A perfect example is the case of God. Does God exist? Can we prove or otherwise know that God exists? Can we know God’s nature? Is God a Supreme Being or Beyond Being? These are weighty questions, and they have been answered at length many times. Different proofs or disproofs have been offered. Various approaches have arisen in history, been swept away by new arguments, only to resurface later in other forms. For example, Aristotle’s Argument from Design to the operation of an Unmoved Mover has morphed many times over the centuries, with Creationism and Intelligent Design as its latest versions. The ontological argument for God’s existence has also resurfaced since it was laid out by St. Anselm in the 11th Century, particularly by Descartes and Leibniz.

Old metaphysical theories are never totally defeated. Their defenders simply die out. Once people forget that a metaphysical theory has been exploded by argument, it creeps back again, for it is always possible to hold any metaphysical theory, no matter how absurd it may seem to some. For example, I might persist in the belief that I exist in the Matrix, despite the fact that I have no empirical evidence for it, nor does any empirical experience make the hypothesis self-contradictory.

The case of God is perhaps the most urgent issue in practical metaphysics, for the simple reason that religious beliefs have the widest ranging practical implications. Such beliefs involve many aspects of life, including emotional responses and moral judgments. The stance of ‘Righteousness’, for example, is a metaphysical stance, for it is founded on the Rock of the Lord. Living up to Divine Commandments is an exercise in practical metaphysics. The same can be said of Kierkegaard’s formula of faith in God: resting transparently in the power that supports you. This idea of resting in God is a powerful one. Life is difficult, troubles mount, and the end is pathetic, if not tragic. It gets to be too much for an individual to bear. What a relief to give up one’s troubles to God.

There is a kind of psychic economy here. I give up my burdens to God, and God buoys me up. This is a widely reported experience. There are many things that are out of an individual’s control. Misfortune is always a possibility, no matter how well you manage what is within your power. It is a real comfort to think that there is a benign power loving and caring for each of us. You may be cut off from the love of family and friends, because they die, while you continue to live a bit longer, but you cannot be cut off from the love of a Divine Father who cares for you as for a child. God plays the role of provider and sustainer, and this metaphysical belief attracts many people. It does so, I would contend, precisely because of the practical benefits that the belief in things unseen brings to the imagination of the confessed believer.

William James adopts this sort of approach in his “Varieties of Religious Experience.” He is not so much interested in logically proving God’s existence as in looking at how human beings describe their religious experiences. He distinguishes between ‘healthy souls’ and ‘sick souls’. So far I have been talking about the practical consequences of religious belief for the ‘healthy’ soul. The healthy soul concentrates on God’s goodness, love, forgiveness and care for us. We have faith that all things will be well in the end. The ‘sick’ soul concentrates more on human sinfulness, particularly its own. Here is Jonathan Edwards’ terrible God who holds us like spiders over the gaping pit of Hell. A perfect example of a sick soul is Simeon Stylites, the ascetic spiritual gymnast, who lived atop a pillar in the desert for twenty years to do penance for sins of the flesh. The practical consequences for the body are clear. The ascetic shows disdain for the body and welcomes its destruction in the name of a higher reality. Similarly, those for whom heaven and hell loom large in a post-terrestrial existence will see life, not as a passing dream, but as a drama that is played out for eternal stakes in the life of each individual.

These are the sorts of practical consequences that arise from having beliefs about God. Practical metaphysics helps us to explore them. There are also practical consequences in believing that there is no God, that the existence of God is always in doubt, or that the whole question of God’s existence is nothing to us one way or the other. All these positions have their costs and their benefits. With the last three, one must forgo Divine comfort, a supernatural afterlife, and the belief that everything will come right in the end. On the positive side, non-believers are not troubled by thoughts of hell, the last judgment, or being observed by heavenly scribes. From this perspective, life is a dream, and nothing lasts forever. Living one’s life in any of these ways is, or can be revealed to be, a choice or stance in life that has no other foundation than the metaphysical commitments of the individual.