Category Archives: Philosophy

42 Fallacies in Spanish

Alexis Beldad Moraleda has translated my 42 Fallacies into Spanish.
The blog post for the book is here: http://interioresy3d.blogspot.com.es/2015/08/cuarenta-y-dos-falacias.html.
The direct download is here: http://www.4shared.com/web/preview/pdf/oTcLSkLuce?
It can also be downloaded directly: 42-Falacias.

ISIS & Rape

Looked at in the abstract, ISIS seems to be another experiment in the limits of human evil, addressing the question of how bad people can become before they are unable to function as social beings. While ISIS is well known for its theologically justified murder and destruction, it has now become known for its theologically justified slavery and rape.

While I am not a scholar of religion, it is quite evident that scriptural justifications of slavery and rape exist and require little in the way of interpretation. In this, Islamic scripture is similar to the Bible—that book also contains rules about the practice of slavery and guidelines regarding the proper practice of rape. Not surprisingly, mainstream religious scholars of Islam and Christianity tend to argue that these aspects of scripture no longer apply or that they can be interpreted in ways that do not warrant slavery or rape. Opponents of these faiths tend to argue that the mainstream scholars are mistaken and that the wicked behavior enjoined in such specific passages expresses the true principles of the faith.

Disputes over specific passages lead to the broader debate about the true tenets of a faith and what it is to be a true member of that faith. To use a current example, opponents of Islam often claim that Islam is inherently violent and that the terrorists exemplify the true members of Islam. Likewise, some who are hostile to Christianity claim that it is a hateful religion and point to Christian extremists, such as God Hates Fags, as exemplars of true Christianity. This is a rather difficult and controversial matter and one I have addressed in other essays.

A reasonable case can be made that slavery and rape are not in accord with Islam, just as a reasonable case can be made that slavery and rape are not in accord with Christianity. As noted above, it can be argued that times have changed, that the texts do not truly justify the practices and so on. However, these passages remain and can be pointed to as theological evidence in favor of the religious legitimacy of these practices. The practice of being selective about scripture is indeed a common one and people routinely focus on passages they like while ignoring passages that they do not like. This selectivity is, not surprisingly, most often used to “justify” prejudice, hatred and misdeeds. Horribly, ISIS does indeed have textual support, however controversial it might be with mainstream Islamic thinkers. That, I think, cannot be disputed.

ISIS members not only claim that slavery and rape are acceptable, they go so far as to claim that rape is pleasing to God. According to Rukmini Callimachi’s article in the New York Times, ISIS rapists pray before raping, rape, and then pray after raping. They are not praying for forgiveness—the rape is part of the religious ritual that is supposed to please God.

The vast majority of monotheists would certainly be horrified by this and would assert that God is not pleased by rape (despite textual support to the contrary). Being in favor of rape is certainly inconsistent with the philosophical conception of God as an all-good being. However, there is the general problem of sorting out what God finds pleasing and what He condemns. In the case of human authorities, it is generally easy to sort out what pleases them and what they condemn: they act to support and encourage what pleases them and act to discourage, prevent and punish what they condemn. If God exists, He certainly is allowing ISIS to do as it will—He never acts to stop them or even to send a clear sign that He condemns their deeds. But, of course, God now seems to follow the same policy as Starfleet’s Prime Directive: He never interferes or makes His presence known.

The ISIS horror provides yet another series of examples of the long-standing problem of evil—if God is all-powerful, all-knowing and good, then there should be no evil. But, since ISIS is freely doing what it does, it would seem to follow that God is lacking in some respect, that He does not exist or that He, as ISIS claims, is pleased by the rape of children.

Not surprisingly, religion is not particularly helpful here—while scripture and interpretations of scripture can be used to condemn ISIS, scripture can also be used to support them in their wickedness. God, as usual, is not getting involved, so we do not know what He really thinks. So, it would seem to be up to human morality to settle this matter.

While there is considerable dispute about morality, the evil of rape and slavery certainly seems to be well-established. It can be noted that moral arguments have been advanced in favor of slavery, usually on the grounds of alleged superiority. However, these moral arguments certainly seem to have been adequately refuted. There are far fewer moral arguments in defense of rape, which is hardly surprising. However, these also seem to have been effectively refuted. In any case, I would contend that the burden of proof rests on those who would claim that slavery or rape are morally acceptable and invite readers to advance such arguments for due consideration.

Moving away from morality, there are also practical matters. ISIS does have a clear reason to embrace its theology of rape: as was argued by Rukmini Callimachi, it is a powerful recruiting tool. ISIS offers men a group in which killing, destruction and rape are not only tolerated but praised as being pleasing to God—the ultimate endorsement. While there are people who do not feel any need to justify their evil, even very wicked people often still want to believe that their terrible crimes are warranted or even laudable. As such, ISIS has considerable attraction to those who wish to do evil.

Accepting this theology of slavery and rape is not without negative consequences for recruiting—while there are many who find it appealing, there are certainly many more who find it appalling. Some ISIS supporters have endeavored to deny that ISIS has embraced this theology of rape and slavery—even they recognize some moral limits. Other supporters have not been dismayed by these revelations and perhaps even approve. Whether this theology of rape and slavery benefits ISIS more than it harms it will depend largely on the moral character of its potential recruits and supporters. I certainly hope that this is a line that many are not willing to cross, thus cutting into ISIS’ potential manpower and financial support. What impact this has on ISIS’ support will certainly reveal much about the character of their supporters—do they have some moral limits?

 


Is Pro-Life a Cover for Misogyny? I: Preliminaries

[Photo: Anti-abortion rally in Washington, D.C. (Photo credit: Wikipedia)]

During a recent discussion, I was asked if I believed that a person who holds to the pro-life position must be a misogynist. While there are misogynists who are pro-life, I hold to what should be obvious: there is no necessary connection between being pro-life and being a misogynist. A misogynist hates women, while a person who holds a pro-life position believes that abortion is morally wrong. There is no inconsistency between holding the moral position that abortion is wrong and not being a hater of women. In fact, a pro-life person could have a benevolent view towards all living beings and be morally opposed to harming any of them—thus including zygotes and women.

While misogynists would tend to be anti-choice because of their hatred of women, they need not be pro-life. That is, hating women and wanting to deny them the choice to have an abortion does not entail that a person believes that abortion is morally wrong. For example, a misogynist could be fine with abortion (such as when it is convenient to him) but think that it should be up to the man to decide if or when a pregnancy is terminated. A misogynist might even be pro-choice for various reasons, though almost certainly not because he is a proponent of the rights of women. As such, there is no necessary connection between the two views.

The discussion then turned to the question of whether or not a pro-life position is a cover for misogyny. The easy and obvious answer is that sometimes it is and sometimes it is not. Since it has been established that a person can be pro-life without being a misogynist, it follows that being pro-life need not be a cover for misogyny. However, it can obviously provide cover for such a position. It is rather easier to sell the idea of restricting abortion by making a moral case against it than by expressing hatred of women and a desire to restrict their choices and reproductive options. Before progressing with the discussion, it is rather important to address two points.

The first point is that even if it is established that a pro-life/anti-abortion person is a misogynist, this does not entail that the person’s position on the issue of abortion is in error. To reject a misogynist’s claims or arguments regarding abortion (or anything) on the grounds that he is a misogynist is to commit a circumstantial ad hominem.

This sort of Circumstantial ad Hominem involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for reasons against her claim. This version has the following form:

  1. Person A makes claim X.
  2. Person B makes an attack on A’s circumstances.
  3. Therefore X is false.

A Circumstantial ad Hominem is a fallacy because a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is made quite clear by the following example: “Bill claims that 1+1=2. But he is a Republican, so his claim is false.” As such, to assert that the pro-life position is in error because some misogynist holds that view would be an error in reasoning.

A second important point is that a person’s consistency or lack thereof in regards to her principles or actions has no relevance to the truth of her claims or the strength of her arguments. To think otherwise is to fall victim to the ad hominem tu quoque fallacy. This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else the person has said or 2) what the person says is inconsistent with her actions. This type of “argument” has the following form:

  1. Person A makes claim X.
  2. Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.
  3. Therefore X is false.

The fact that a person makes inconsistent claims does not make any particular claim he makes false (although of any pair of inconsistent claims at most one can be true, though both can be false). Also, the fact that a person’s claims are not consistent with his actions might indicate that the person is a hypocrite, but this does not prove his claims are false. For example, a chain-smoking doctor who claims that smoking is unhealthy might be a hypocrite, yet her claim is not thereby false.

A person’s inconsistency also does not show that the person does not believe her avowed principle—she might simply be ignorant of its implications. That said, such inconsistency could be evidence of hypocrisy. While sorting out a person’s actual principles is not relevant to logical assessment of the person’s claims, doing so is clearly relevant to many types of decision making regarding the person. One area where sorting out a person’s principles matters is in voting. In the next essay, this matter will be addressed.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

The Parable of the Thermostat

“So, an argument is sound when it is valid and actually has all true premises. Any of that stuff about deduction need any clarification or are there any questions or stuff?”

“Professor, it is too warm in the room. Can you turn up the AC?”

“I cannot. But, this will probably be the most important lesson you get in this class: see the thermostat there?”

“Um, yeah.”

“It isn’t a thermostat. It is just an empty plastic shell screwed to the wall.”

“No way.”

“Way. Here, I’ll show you….see, just an empty shell.”

“But why? Why would they do that to us?”

“It is so people feel they have some control. What we have here is what some folks like to call a ‘teaching moment.’ So, wipe that sweat from your eyes because we are about to have a moment: life is like this empty shell. We think we are in control, but we are just fiddling.”

 

I was a very curious kid, in that I asked (too) many questions and went so far as taking apart almost anything that 1) could be taken apart and 2) was unguarded. This curiosity led me to graduate school and then to the classroom where the above described thermostat incident occurred. It also provided me with the knowledge that the thermostats in most college buildings are just empty shells intended to provide people with the illusion of control. Apparently, fiddling with the thermostat does have a placebo effect on some folks—by changing the setting they “feel” that they become warmer or cooler, as the case might be. I was not fooled by the placebo effect—which led to the first time I took a fake thermostat apart. After learning that little secret, I got into the habit of checking the thermostats in college buildings and found, not surprisingly, that they were almost always fakes.

When I first revealed the secret to the class, most students were surprised. Students today seem much more familiar with this—when a room is too hot or too cold, they know that the thermostat does nothing, so they usually just go to the dean’s office to complain. However, back in those ancient days, it did make for a real teaching moment.

Right away, the fake thermostat teaches a valuable, albeit obvious, lesson: an exterior might hide an unexpected interior, so it is wise to look beyond the surface. This applies not only to devices like thermostats, but also to ideas and people. This lesson is especially appropriate for philosophy, which is usually concerned with getting beneath the realm of appearance to the truth of the matter. Plato, with his discussion of the lovers of sights and sounds, made a similar sort of point long ago.

A somewhat deeper lesson is not directly about the thermostat, but about people—specifically, about the sort of people who would think to have fake thermostats installed. On the one hand, these people might be regarded as benign or at least not malign. Faced with the challenge of maintaining a general temperature for everyone, yet also aware that people will be upset if they do not feel empowered, they solved these problems with the placebo thermostat. Thus, people cannot really mess with the temperature, yet they feel better for thinking they have some control. This can be regarded as some small evidence that people are sort-of-nice.

On the other hand, the installation of the fake thermostats can be regarded as something of an insult. This is because those who have them installed presumably assume that most people are incapable of figuring out that they are inefficacious shells and that most people will be mollified by the placebo effect. This can be taken as some small evidence that the folks in charge are patronizing and have a rather low opinion of the masses.

Since the thermostat is supposed to serve a role in a parable, there is also an even deeper lesson that is not about thermostats specifically. Rather, it is about the matter of control and power. The empty thermostat is an obvious metaphor for any system that serves to make people feel that they have influence and control, when they actually do not.

In the more cynical and pro-anarchy days of my troubled youth, I took the thermostat as a splendid metaphor for voting: casting a vote gives a person the feeling that she has some degree of control, yet it is but the illusion of control. It is like trying to change the temperature with the thermostat shell. Thoreau made a somewhat similar point when he noted that “Even voting for the right is doing nothing for it. It is only expressing to men feebly your desire that it should prevail.”

While I am less cynical and anarchistic now, I still like the metaphor. For most citizens, the political machinery they can access is like the empty thermostat shell: they can fiddle with the fake controls and think it has some effect, but the real controls are in the hands of the folks who are really running things. That the voters rarely get what they want seems to have been rather clearly shown by recent research into the workings of the American political system. While people fiddle with the levers of the voting machines, the real decisions seem to be made by the oligarchs.

The metaphor is not perfect: with the fake thermostat, the actions of those fiddling with it have no effect at all on the temperature (except for whatever heat their efforts might generate). In the case of politics, the masses do have some slight chance of influence, albeit a very low chance. Some more cynical than I might respond by noting that if the voters get what they want, it is just a matter of coincidence. Going with the thermostat analogy, a person fiddling with the empty shell might find that her fiddling matches a change caused by the real controls—so her “success” is a matter of lucky coincidence.

In any case, the thermostat shell makes an excellent metaphor for many things and teaches that one should always consider what lies beneath the surface, especially when trying to determine if one really has some control or not.

 


Discussing the Shape of Things (that might be) to Come

One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.

One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism. This criticism does assume that philosophy has value and it is this assumption that also provides a foundation for the criticism. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as being a waste of time—responding to this view would require a general defense of philosophy and this goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.

This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology might exist in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.

In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out the ethics of such matters as how the car should be programmed to “decide” what to hit and what to avoid when an accident is unavoidable. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.

Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or especially) in regards to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above-mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back up a person to a storage system and, upon the death of the original body, restore the backup to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.

 


Avoiding the AI Apocalypse #3: Don’t Train Your Replacement

Donald gazed down upon the gleaming city of Newer York and the gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted to the flesh-time, when his body had been a skin-bag holding an array of organs that were always but one accident or mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the distress and promptly corrected the problem, encrypting that file and flagging it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

 

While the classic AI apocalypse ends humanity with a bang, the end might be a quiet thing—gradual replacement rather than rapid and noisy extermination. For some, this sort of quiet end could be worse: no epic battle in which humanity goes out guns blazing and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete piece of office equipment.

There are various ways such scenarios could take place. One, which occasionally appears in science fiction, is that humans decline because the creation of a robot-dependent society saps them of what it takes to remain the top species. This, interestingly enough, is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more breeding, rather than less—in the science fiction stories human reproduction typically slows and eventually stops. The human race quietly ends, leaving behind the machines—which might or might not create their own society.

Alternatively, the humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse and the human race gets another chance.

There are various ways to avoid such quiet apocalypses. One is to resist creating such a dependent society. Another option is to have a safety system against a collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These certainly do provide a foundation for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a stock plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is relatively limited, the foundations of the future are being laid down. For example, prosthetic replacements are fairly crude, but it is merely a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is the promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point clearly towards the cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will no doubt be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they will be gradually replaced with technology. For example, parts damaged by a stroke might be replaced. Some will also elect to do more than replace damaged or failed parts—they will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail piece by piece. Like the ship of Theseus so beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether or not the same person will remain, there is the clear and indisputable fact that what remains will not be Homo sapiens—it will not be a member of that species, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of Homo sapiens—the AI apocalypse will be complete. To use a rough analogy, the machine replacements of Homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be replied that humanity would still remain: the machines that replaced the organic Homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting the human culture, values and so on would suffice—that being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be Homo sapiens—that species would have been replaced in the gradual and quiet AI apocalypse.

 


Introduction to Philosophy

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism as well as the start of the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the 2nd half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals, the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.


Robot Love II: Roboslation under the Naked Sun

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians, the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smart phones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new thing. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But, the creation of reasonably sophisticated social robots is a relatively new thing. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that are and will sell social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets provide a good choice for people with allergies, who are not responsible enough for living pets, or who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company—at least those robots whose cost is not subsidized by advertising. Imagine a companion that pops in a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner/user would be able to make changes to the robot. A person can, quite literally, make a friend with the desired qualities and without the undesired ones. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like and so on. However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed and satisfactory companions could be created.

Since I have written specifically about sexbots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough time interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

 


Is Libertarianism Viable?

The United States has had a libertarian and anarchist thread since the beginning, which is certainly appropriate for a nation that espouses individual liberty and expresses distrust of the state. While there are many versions of libertarianism and these range across the political spectrum, I will focus on one key aspect of libertarianism. To be specific, I will focus on the idea that the government should impose minimal limits on individual liberty and that there should be little, if any, state regulation of business. These principles were laid out fairly clearly by the American anarchist Henry David Thoreau in his claims that the best government governs least (or not at all) and that government only advances business by getting out of its way.

I must admit that I find the libertarian-anarchist approach very appealing. Like many politically minded young folks, I experimented with a variety of political theories in college. I found Marxism unappealing—as a metaphysical dualist, I must reject materialism. Also, I was well aware of the brutally oppressive and murderous nature of the Marxist states, which were in direct opposition to both my ethics and my view of liberty. Fascism was certainly right out—the idea of the total state ran against my views of liberty. Since, like many young folks, I thought I knew everything and did not want anyone to tell me what to do, I picked anarchism as my theory of choice. Since I am morally opposed to murdering people, even for a cause, I sided with the non-murderous anarchists, such as Thoreau. I eventually outgrew anarchism, but I still have many fond memories of my halcyon days of naïve political views. As such, I do really like libertarian-anarchism and really want it to be viable. But, I know that liking something does not entail that it is viable (or a good idea).

Put in extremely general terms, a libertarian system would have a minimal state with extremely limited government impositions on personal liberty. The same minimalism would also extend to the realm of business—they would operate with little or no state control. Since such a system seems to maximize liberty and freedom, it seems to be initially very appealing. After all, freedom and liberty are good and more of a good thing is better than less. Except when it is not.

It might be wondered how more liberty and freedom is not always better than less. I find two of the stock answers both appealing and plausible. One was laid out by Thomas Hobbes. In discussing the state of nature (which is a form of anarchism—there is no state), he notes that total liberty (the right to everything) amounts to no right at all. This is because everyone is free to do anything and everyone has the right to claim (and take) anything. This leads to his infamous war of all against all, making life “nasty, brutish and short.” Like too much oxygen, too much liberty can be fatal. Hobbes’ solution is the social contract and the sovereign: the state.

A second one was presented by J.S. Mill. In his discussion of liberty, he argued that liberty requires limitations on liberty. While this might seem like a paradox or a slogan from Big Brother, Mill is actually quite right in a straightforward way. For example, your right to free expression requires that my right to silence you be limited. As another example, your right to life requires limits on my right to kill. As such, liberty does require restrictions on liberty. Mill does not limit the limiting of liberty to the state—society can impose such limits as well.

Given the plausibility of the arguments of Hobbes and Mill, it seems reasonable to accept that there must be limits on liberty in order for there to be liberty. Libertarians, who usually fall short of being true anarchists, do accept this. However, they do want the broadest possible liberties and the least possible restrictions on business.

In theory, libertarianism would appear to provide the basis for a viable political system. After all, if libertarianism is the view that the state should impose the minimal restrictions needed to have a viable society, then it would be (by definition) a viable system. However, there is the matter of libertarianism in practice and also the question of what counts as a viable political system.

Looked at in a minimal sense, a viable political system would seem to be one that can maintain its borders and internal order. Meeting these two minimal objectives would seem to be possible for a libertarian state, at least for a while. That said, the standards for a viable state might be taken to be somewhat higher, such as the state being able to (as per Locke) protect rights and provide for the good of the people. It can be (and has been) argued that such a state would need to be more robust than the libertarian state. It can also be argued that a true libertarian state would either devolve into chaos or be forced into abandoning libertarianism.

In any case, the viability of a libertarian state would seem to depend on two main factors. The first is the ethics of the individuals composing the state. The second is the relative power of the individuals. This is because the state is supposed to be minimal, so that limits on behavior must be set largely by other factors.

In regards to ethics, people who are moral can be relied on to self-regulate their behavior to the degree they are moral. To the degree that the population is moral, the state does not need to impose limitations on behavior, since the citizens will generally not behave in ways that require the imposition of the compulsive power of the state. As such, liberty would seem to require a degree of morality on the part of the citizens that is inversely proportional to the limitations imposed by the state. Put roughly, good people do not need to be coerced by the state into being good. As such, a libertarian state can be viable to the degree that people are morally good. While some thinkers have faith in the basic decency of people, many (such as Hobbes) regard humans as lacking in what others would call goodness. Hence, the usual arguments about how the moral failings of humans require the existence of the coercive state.

In regards to the second factor, having liberty without an external coercive force maintaining the liberty would require that the citizens be comparable in political, social and economic power. If some people have greater power they can easily use this power to impose on their fellow citizens. While the freedom to act with few (or no) limits is certainly a great deal for those with greater power, it certainly is not very good for those who have less power. In such a system, the powerful are free to do as they will, while the weaker people are denied their liberties. While such a system might be libertarian in name, freedom and liberty would belong to the powerful and the weaker would be denied. That is, it would be a despotism or tyranny.

If people are comparable in power or can form social, political and economic groups that are comparable in power, then liberty for all would be possible—individuals and groups would be able to resist the encroachments of others. Unions, for example, could be formed to offset the power of corporations. Not surprisingly, stable societies are able to build such balances of power to avoid the slide into despotism and then to chaos. Stable societies also have governments that endeavor to protect the liberties of everyone by placing limits on how much people can inflict their liberties on other people. As noted above, people can also be restrained by their ethics. If people and groups varied in power, yet abided by the limits of ethical behavior, then things could still go well for even the weak.

Interestingly, a balance of power might actually be disastrous. Hobbes argued that it is precisely because people are roughly equal in power that the state of nature is a state of war: no one is strong enough to be safe from everyone else, yet everyone is strong enough to hope to take what they want by force. This rests on his view that people are hedonistic egoists—that is, people are basically selfish and do not care about other people.

Obviously enough, in the actual world people and groups vary greatly in power. Not surprisingly, many of the main advocates of libertarianism enjoy considerable political and economic power—they would presumably do very well in a system that removed many of the limitations upon them, since they would be freer to do as they wished and the weaker people and groups would be unable to stop them.

At this point, one might insist on a third factor that is beloved by the Adam Smith crowd: rational self-interest. The usual claim is that people would limit their behavior because of the consequences arising from their actions. For example, a business that served contaminated meat would soon find itself out of business because the survivors would stop buying the meat and spread the word. As another example, an employer who used his power to compel his workers to work long hours in dangerous conditions for low pay would find that no one would be willing to work for him and would be forced to improve things to retain workers. As a third example, people would not commit misdeeds because they would be condemned or punished by vigilante justice. The invisible hand would sort things out, even if people are not good and there is a great disparity in power.

The easy and obvious reply is that this sort of system generally does not work very well—as shown by history. If there is a disparity in power, that power will be used to shield the powerful from the negative consequences of their actions. For example, those who have economic power can use it to coerce people into working for low pay and can also use it to keep those people from organizing into a counter-power able to resist them. This is why, obviously enough, people like the Koch brothers oppose unions.

Interestingly, most people get that rational self-interest does not suffice to keep people from acting badly in regards to crimes such as murder, theft, extortion, assault and rape. However, there is the odd view that rational self-interest will somehow work to keep people from acting badly in other areas. This, as Hobbes would say, arises from an insufficient understanding of humans. Or it is a deceit on the part of people who have the power to do wrong and get away with it.

While I do like the idea of libertarianism, a viable libertarian society would seem to require people who are predominantly ethical (and thus self-regulating) or a careful balance of power. Or, alternatively, a world in which people are rational and act from self-interest in ways that would maintain social order. This is clearly not our world.


The Implications of Self-Driving Cars

My friend Ron claims that “Mike does not drive.” This is not true—I do drive, but I do so as little as possible. Part of it is frugality—I don’t want to spend more than I need to on gas and maintenance. Most of it is that I hate to drive. Some of that hatred is due to the fact that driving time is mostly wasted time—I would rather be doing something else. Most of it is that I find driving an awful blend of boredom and stress. As such, I am completely in favor of driverless cars and want Google to take my money. That said, it is certainly worth considering some of the implications of the widespread adoption of driverless cars.

One of the main selling points of driverless cars is that they are supposed to be significantly safer than humans. This is for a variety of reasons, many of which involve the fact that machines do not (yet) get sleepy, bored, angry, distracted or drunk. Assuming that the significant increase in safety pans out, this means that there will be significantly fewer accidents and this will have a variety of effects.

Since insurance rates are (supposed to be) linked to accident rates, one might expect that insurance rates will go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.

Lower accident rates also entail fewer injuries, which will presumably be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also mean less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and perhaps insurance rates (or merely mean more profits for insurance companies, since they would be paying out less often). On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. On the whole, though, reducing the number of injuries seems to be a moral good on utilitarian grounds.

A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing—on the assumption that death is bad. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths is probably good.

While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, this also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents—vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.

Another economic impact of self-driving vehicles will be in the area of those who make money driving other people. If my truck is fully autonomous, rather than take a cab to the airport, I can simply have my own truck drop me off and drive home. It can then come get me at the airport. People who like to drink to the point of impairment will also not need cabs or services like Uber—their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines or buses—if your car can safely drive you to your destination while you sleep, play video games, read or even exercise (why not have exercise equipment in a vehicle for those long trips?), why bother with a plane or bus ticket? No more annoying pat downs, cramped seating, delays or cancellations.

As a final point, if self-driving vehicles automatically operate within the traffic laws (such as speed limits and red lights), then the revenue from tickets and traffic violations will be reduced significantly. Since vehicles will be loaded with sensors and cameras, passengers (one can hardly describe them as drivers anymore) will have considerable data with which to dispute any tickets. Parking revenue (fees and tickets) might also be reduced—it might be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities—they would need to find alternative sources of revenue (or come up with new violations that self-driving cars cannot counter). Alternatively, the policing of roads might be significantly reduced—after all, if there are far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction in the size of the force. The downside of force reduction would be that the former police officers would be out of a job.

If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in considerable savings, although there would be the corresponding loss to those who sell, install and maintain these things.
