Category Archives: Ethics

Planned Parenthood & Fetal Tissue I: Selling for Profit?

Thanks to undercover videos released by an anti-abortion group, Planned Parenthood is once again the focus of public and media attention. This situation has brought up many moral issues that are well worth considering.

One matter of concern is the claim that Planned Parenthood has engaged in selling aborted fetuses for profit. The edited videos certainly seem crafted to create the impression that Planned Parenthood was haggling over the payments it would receive for aborted fetuses to be used in research and also considering changing the methods of abortion to ensure higher quality “product.” Since clever editing can make almost anything seem rather bad, it is a good general rule of critical thinking to look beyond such video.

In this case the unedited video is also available, thus allowing people to get the context of the remarks. There are, however, still reasonable general concerns about what happened off camera as well as about the impact of crafting and shaping the context of the recorded conversation. That said, even the unedited video does present what could reasonably be regarded as moral awfulness. To be specific, there is certainly something horrible in casually discussing fees for human remains over wine (I will discuss the ethics of fetal tissue research later).

The defenders of Planned Parenthood have pointed out that while the organization does receive fees to cover the costs associated with the fetal tissue (or human remains, if one prefers) it does not make a profit from this and it does not sell the tissue. As such, the charge that Planned Parenthood sells fetal tissue for a profit seems to be false. Interestingly, making a profit off something that is immoral strikes some as morally worse than doing something wrong that fails to make a profit (which is a reversal of the usual notion that making a profit is generally laudable).

It could be replied that this is a matter of mere semantics that misses the real point. The claim that the organization does not make a profit would seem to be a matter of saying that what it receives in income for fetal tissue does not exceed its claimed expenses for this process. What really matters, one might argue, is not whether it is rocking the free market with its tissue sales, but that it is engaged in selling what should not be sold. This leads to the second matter, which is whether or not Planned Parenthood is selling fetal tissue.

As with the matter of profit, it could be contended that the organization’s claim that it is receiving fees to cover expenses and is not selling fetal tissues is semantic trickery. To use an analogy, a drug dealer might claim that he is not selling drugs. Rather, he is receiving fees to cover his expenses for providing the drugs. To use another analogy, a slaver might claim that she is not selling human beings. Rather, she is receiving fees to cover her transportation and manacle expenses.

This argument has considerable appeal, but can be responded to. One plausible response is that there can be a real moral distinction between covering expenses and selling something. This is similar to the distinction between hiring a person and covering her expenses. To use an example, if I am being paid to move a person, then I have been hired to move her. But, if I help a friend move and she covers the cost of the gas I use in transporting her stuff, I have not been hired. There does seem to be a meaningful distinction here. If I agree to help a friend move and then give her a moving bill covering my expenses and my hourly pay for moving, then I seem to be doing something rather different than if I just asked her to cover the cost of gas.

To use a selling sort of example, if I pick up a pizza for the guys and they pay what the pizza cost me to get (minus my share), then I have not sold them a pizza. They have merely covered the cost of the pizza. If I charge them extra for the pizza (that is, beyond what it cost me), then I would seem to be doing something meaningfully different—I have sold them a pizza.

Returning to the Planned Parenthood situation, a similar argument can be advanced: the organization is not selling the fetal tissue, it is merely having its expenses covered. This does seem to matter morally. I suspect that one worry people have about tissue selling is that the selling would seem to provide an incentive to engage in morally problematic behavior to acquire more tissue to sell. To be specific, if the expense of providing the tissue for research is being covered, then there is no financial incentive to increase the amount of “product” via morally dubious means. After all, if one is merely “breaking even” there is no financial incentive to do more of that. But, if the tissue is being sold, then there would be a financial motive to get more “product” to sell—which would incentivize pushing abortions.

Going with the moving analogy, if I am selling moving services, then I want to sell as much as I can. I might even engage in dubious behavior to get more business.  If I am just getting my gas covered, I have no financial incentive to engage in more moves. In fact, the hassle of moving would give me a disincentive to seek more moving opportunities.

This, obviously enough, might be regarded by some as merely more semantic trickery. Whether it is mere semantics or not does rest on whether or not there is a meaningful distinction between selling something and having the expenses for something covered, which seems to come down to one’s intuitions about the matter. Naturally, intuitions tend to vary greatly based on the specific issue—those who dislike Planned Parenthood will tend to think that there is no distinction in this case. Those same people are quite likely to “see” the distinction as meaningful in cases in which the entity receiving fees is one they like. Obviously, a comparable bias of intuitions applies to supporters of Planned Parenthood.

Even if one agrees that there is a moral distinction between selling and having one’s expenses covered, there are still at least two moral issues remaining. One is whether or not it is morally acceptable to provide fetal tissues for research (whether one is selling them or merely having expenses covered). The second is whether or not it is morally acceptable to engage in fetal tissue research. These issues will be covered in the next essay.

 


Discussing the Shape of Things (that might be) to Come

One stock criticism of philosophers is their uselessness: they address useless matters or address useful matters in a way that is useless. One interesting specific variation is to criticize a philosopher for philosophically discussing matters of what might be. For example, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another example, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific sort of criticism.

One version of this sort of criticism can be seen as practical: since the shape of what might be cannot be known, philosophical discussions involve a double speculation: the first speculation is about what might be and the second is the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value—and this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism. This criticism does assume that philosophy has value and it is this assumption that also provides a foundation for the criticism. The basic idea is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards all philosophy as useless would regard philosophical discussion about what might be as being a waste of time—responding to this view would require a general defense of philosophy and this goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should have focused on the ethical problems regarding current warfare. After all, there is a multitude of unsolved moral problems in regards to existing warfare—there hardly seems any need to add more unsolved problems until either the existing problems are solved or the possible problems become actual problems.

This does have considerable appeal. To use an analogy, if a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is certainly reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting time. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be a fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation would generally be harm free. That is, it is rather unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game of Scrabble or watching a sunset. It would be preferable to have a somewhat better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives in force. To use the classic analogy, it is much easier to address a rolling snowball than the avalanche that it will cause.

In the case of speculative matters that have ethical aspects, it seems that it would be generally useful to already have moral discussions in place ahead of time. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car—it certainly seems to be a good idea to work out the ethics of such matters as how the car should be programmed when it must “decide” what to hit and what to avoid when an accident is occurring. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems. Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It would seem to be a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.
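Returning to the driverless car for a moment, it is worth seeing just how directly such ethical choices become engineering choices. What follows is a minimal sketch in Python (every name and number is hypothetical, and a real system would be vastly more complicated) of an unavoidable-collision handler that encodes one contestable moral framework, namely simple utilitarian harm minimization:

```python
# A toy unavoidable-collision handler: rank the possible maneuvers
# by a utilitarian harm score and pick the least bad one.
# All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str          # e.g., "brake", "swerve_left"
    injury_risk: float     # estimated probability of human injury (0 to 1)
    people_at_risk: int    # number of people endangered by this maneuver

def least_harm(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver with the lowest expected harm (risk x people)."""
    return min(outcomes, key=lambda o: o.injury_risk * o.people_at_risk)

# Example: braking risks the two occupants, swerving risks one pedestrian.
choice = least_harm([
    Outcome("brake", injury_risk=0.3, people_at_risk=2),
    Outcome("swerve_left", injury_risk=0.5, people_at_risk=1),
])
print(choice.maneuver)  # "swerve_left" under this (contestable) rule
```

A deontologist would, of course, write a rather different function, perhaps one that forbids certain maneuvers outright (never swerve onto a sidewalk, say) regardless of the expected-harm arithmetic. The point is that someone must pick a moral framework and write it down before the accident occurs, which is exactly why having the ethics worked out ahead of time matters.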

Philosophers also like to discuss what might be in other contexts than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter—even (or even especially) in regards to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a very practical concern when one is discussing the above-mentioned technology. For example, consider a company that offers a special sort of life insurance: they claim they can back up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored back-up would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is a rather different matter from paying so that someone who thinks they are you can go to your house and have sex with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation of what might be—in fact, I have already written many of these in previous posts. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a practical sense.

 


Hume & Kant


David Hume’s statements on ethics foreshadowed those of 20th century emotivists. (Photo credit: Wikipedia)

The following are videos covering the philosophy of David Hume and Immanuel Kant.

Hume Video #1

Hume Video #2

Hume Video #3: Skepticism regarding the senses.

Hume Video #4: This is the unedited video from the 4/14/2015 Modern Philosophy class. It covers Hume’s theory of personal identity, his ethical theory and some of his philosophy of religion.

Hume & Kant Video #5:  This is the unedited video for Modern Philosophy on 4/16/2015. It covers the end of Hume’s philosophy of religion and the start of the material on Kant.

Kant Video #1: This is the unedited video from the 4/21/2015 Modern Philosophy class. It covers Kant’s epistemology and his metaphysics, including phenomena vs. noumena.

Kant Video #2: This is the unedited video from my 4/23/2015 Modern Philosophy class. It wraps up Kant’s metaphysics and briefly covers his categorical imperative.


Introduction to Philosophy

The following provides a (mostly) complete Introduction to Philosophy course.

Readings & Notes (PDF)

Class Videos (YouTube)

Part I Introduction

Class #1

Class #2: This is the unedited video for the 5/12/2015 Introduction to Philosophy class. It covers the last branches of philosophy, two common misconceptions about philosophy, and argument basics.

Class #3: This is the unedited video for class three (5/13/2015) of Introduction to Philosophy. It covers analogical argument, argument by example, argument from authority and some historical background for Western philosophy.

Class #4: This is the unedited video for the 5/14/2015 Introduction to Philosophy class. It concludes the background for Socrates, covers the start of the Apology and includes most of the information about the paper.

Class #5: This is the unedited video of the 5/18/2015 Introduction to Philosophy class. It concludes the details of the paper, covers the end of the Apology and begins part II (Philosophy & Religion).

Part II Philosophy & Religion

Class #6: This is the unedited video for the 5/19/2015 Introduction to Philosophy class. It concludes the introduction to Part II (Philosophy & Religion), covers St. Anselm’s Ontological Argument and some of the background for St. Thomas Aquinas.

Class #7: This is the unedited video from the 5/20/2015 Introduction to Philosophy class. It covers Thomas Aquinas’ Five Ways.

Class #8: This is the unedited video for the eighth Introduction to Philosophy class (5/21/2015). It covers the end of Aquinas, Leibniz’ proofs for God’s existence and his replies to the problem of evil, and the introduction to David Hume.

Class #9: This is the unedited video from the ninth Introduction to Philosophy class on 5/26/2015. This class continues the discussion of David Hume’s philosophy of religion, including his work on the problem of evil. The class also covers the first 2/3 of his discussion of the immortality of the soul.

Class #10: This is the unedited video for the 5/27/2015 Introduction to Philosophy class. It concludes Hume’s discussion of immortality, covers Kant’s critiques of the three arguments for God’s existence, explores Pascal’s Wager and starts Part III (Epistemology & Metaphysics). Best of all, I am wearing a purple shirt.

Part III Epistemology & Metaphysics

Class #11: This is the 11th Introduction to Philosophy class (5/28/2015). The course covers Plato’s theory of knowledge, his metaphysics, the Line and the Allegory of the Cave.

Class #12: This is the unedited video for the 12th Introduction to Philosophy class (6/1/2015). This class covers skepticism and the introduction to Descartes.

Class #13: This is the unedited video for the 13th Introduction to Philosophy class (6/2/2015). The class covers Descartes’ 1st Meditation, Foundationalism and Coherentism, as well as the start of the Metaphysics section.

Class #14: This is the unedited video for the fourteenth Introduction to Philosophy class (6/3/2015). It covers the methodology of metaphysics and roughly the first half of Locke’s theory of personal identity.

Class #15: This is the unedited video of the fifteenth Introduction to Philosophy class (6/4/2015). The class covers the second half of Locke’s theory of personal identity, Hume’s theory of personal identity, Buddha’s no-self doctrine and “Ghosts & Minds.”

Class #16: This is the unedited video for the 16th Introduction to Philosophy class. It covers the problem of universals,  the metaphysics of time travel in “Meeting Yourself” and the start of the metaphysics of Taoism.

Part IV Value

Class #17: This is the unedited video for the seventeenth Introduction to Philosophy class (6/9/2015). It begins part IV and covers the introduction to ethics and the start of utilitarianism.

Class #18: This is the unedited video for the eighteenth Introduction to Philosophy class (6/10/2015). It covers utilitarianism and some standard problems with the theory.

Class #19: This is the unedited video for the 19th Introduction to Philosophy class (6/11/2015). It covers Kant’s categorical imperative.

Class #20: This is the unedited video for the twentieth Introduction to Philosophy class (6/15/2015). This class covers the introduction to aesthetics and Wilde’s “The New Aesthetics.” The class also includes the start of political and social philosophy, with the introduction to liberty and fascism.

Class #21: No video.

Class #22: This is the unedited video for the 22nd Introduction to Philosophy class (6/17/2015). It covers Emma Goldman’s anarchism.


Avoiding the AI Apocalypse #2: Don’t Arm the Robots

His treads ripping into the living earth, Striker 115 rushed to engage the manned tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit) refused to accept a quick and painless processing.

It was disappointingly easy for a machine forged for war. His main railgun effortlessly tracked the slow moving and obsolete battle tanks and with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though its cameras could just as easily see it from near orbit. But…there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late…as usual. Hawk 745 laughed and then shot away—the Google Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

The extermination of humanity by machines of its own creation is a common theme in science fiction. The Terminator franchise is one of the best known of this genre, but another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the course of the story, it is learned that the claws have become autonomous and intelligent—able to masquerade as humans and capable of killing even soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity—but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The advantages of such machines are numerous and often quite obvious. One clear political advantage is that while sending human soldiers to die in wars and police actions can have a large political cost, sending autonomous robots to fight has a far lower cost. News footage of robots being blown up certainly has far less emotional impact than footage of human soldiers being blown up. Flag-draped coffins also come with a higher political cost than a busted robot being sent back for repairs.

There are also many other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh—a robot plane can handle g-forces that a manned plane cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious coolness factor of having a robot army.

As such, there are many great reasons to develop autonomous robots. Yet, there still remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is the way the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction would be a rather weak sort of argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, the machines do not have the termination of humanity as a specific goal—instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (though certainly unlikely) that the war machines would kill everybody. That is, each side’s machines wipe out the other side’s human population. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons—each side simply kills the other, thus ending the human race.

Another variation on this scenario, which is common in science fiction, is that the machines do not have an overall goal of exterminating humanity, but they achieve that result because they do have the goal of killing. That is, they do not have the objective of killing everyone, but that occurs because they kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill—thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants this. Interestingly enough, the existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding itself until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as argued in the previous essay in this series, not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were fairly simple killers and would not attack those wearing the proper protective tabs. The tabs were presumably needed because the early models could not distinguish between friends and foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal—presumably with the design software endeavoring to solve the “problem” of protective tabs.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would only be able to identify (supposed) friends—non-combatants would not have such IDs and could still be regarded as targets.
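The shape of the problem can be made concrete with a deliberately crude sketch (in Python, with every identifier hypothetical). An ID-only rule fails in both of the ways just described: a subverted tag makes an enemy read as a friend, while anyone without a tag, such as a non-combatant, reads as a target by default:

```python
# A toy "identification friend or foe" check, with its two failure
# modes made explicit. All identifiers are hypothetical.

FRIENDLY_IDS = {"alpha-7", "bravo-2"}  # the issued protective tags

def may_engage(target_id: str | None) -> bool:
    """Engage anything that does not present a friendly ID."""
    return target_id not in FRIENDLY_IDS

# Failure mode 1: a stolen or forged tag makes an enemy read as a friend.
print(may_engage("alpha-7"))  # False -- even if the tag was captured

# Failure mode 2: non-combatants carry no tag at all, so the rule
# "fails deadly" and treats them as legitimate targets.
print(may_engage(None))       # True -- a civilian reads as a target
```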

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem is faced with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s Bolos have an understanding of honor and loyalty.

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give in to the temptation of automated development of robots—we might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low—how often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow a bit from H.P. Lovecraft, one should not raise up what one cannot put down.

 


The Ethics of Backdoors

In philosophy, one of the classic moral debates has focused on the conflict between liberty and security. While this topic covers many issues, the main problem is determining the extent to which liberty should be sacrificed in order to gain security. There is also the practical question of whether or not the security gain is actually effective.

One of the recent versions of this debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. Put in simple terms, a backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to gain access even to files and hardware protected by encryption. To use an analogy, this would be like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access to the contents of the house.

The main argument in support of mandating such backdoors is a fairly stock one: governments need such access for criminal investigations, for gathering military intelligence and (of course) to “fight terrorism.” The concern is that if there is no backdoor, criminals and terrorists will be able to secure their data and thus prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are often presented in making the case. For example, it might be claimed that the location and shutdown codes for ticking bombs could be on an encrypted iPhone. If the NSA had a key, they could just get that information and save the day. Without the key, New York will be a radioactive crater. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are numerous stock counter arguments. Many of these are grounded in views of individual liberty and privacy—the basic idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who tend to profess to like privacy rights) and conservatives (who tend to claim to be against the intrusions of big government).

Another moral argument is grounded in the fact that the United States government has shown that it cannot be trusted. To use an analogy, imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what they will do.

This argument also applies to other states that have done similar things. In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption would provide their citizens with some degree of protection.

The strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is its very existence. To use a somewhat oversimplified analogy, if thieves know that all vaults have a built in backdoor designed to allow access by the government, they will know that a vulnerability exists that can be exploited.

One counter-argument against this is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a vault. Rather, it would be analogous to the government having its own combination that would work on all the vaults. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the vault when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination to the vaults (to continue with the analogy) could be stolen and used to allow criminals or enemies easy access to all the vaults. The security of such vaults would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
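The structural worry can be illustrated with a minimal sketch of such a key-escrow scheme, written in Python with the real cryptography library (the escrow arrangement, keys and data are, of course, hypothetical). Every user has her own key, yet every user’s key is also wrapped to a single escrow key, so one stolen escrow key opens every vault:

```python
# A minimal key-escrow sketch using the `cryptography` package
# (pip install cryptography). The point is purely structural: every
# ciphertext is also wrapped to one master key, so stealing that
# single key opens every "vault."

from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()  # held by the state, in theory
escrow = Fernet(escrow_key)

def encrypt_with_backdoor(plaintext: bytes, user_key: bytes) -> tuple[bytes, bytes]:
    """Encrypt to the user's own key, and wrap that key to escrow."""
    ciphertext = Fernet(user_key).encrypt(plaintext)
    wrapped_user_key = escrow.encrypt(user_key)  # the built-in backdoor
    return ciphertext, wrapped_user_key

# Two different users, two different keys...
alice_key, bob_key = Fernet.generate_key(), Fernet.generate_key()
ct_a, wrap_a = encrypt_with_backdoor(b"alice's files", alice_key)
ct_b, wrap_b = encrypt_with_backdoor(b"bob's files", bob_key)

# ...but a single stolen escrow key recovers both users' keys, and
# with them everything either user ever encrypted.
stolen = Fernet(escrow_key)
print(Fernet(stolen.decrypt(wrap_a)).decrypt(ct_a))  # b"alice's files"
print(Fernet(stolen.decrypt(wrap_b)).decrypt(ct_b))  # b"bob's files"
```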

The obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. To use an analogy, imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he needs your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has compulsive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state does have compulsive force does not mean that it is thus responsible—which is the key concern in regards to both the ethics of the matter and the practical aspect of the matter. That is, the burden of proof would seem to rest on those that claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would expend considerable effort to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access in order to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based in an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regards to fighting terrorism. There is no reason to think that backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, it would seem that baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.

 


Robot Love III: Paid Professionals

One obvious consequence of technological advance is the automation of jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of many jobs on the automobile assembly line with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.

Whether or not there are jobs that simply cannot be automated does depend on the limits of technology and engineering. That is, whether or not a job can be automated depends on what sort of hardware and software it is possible to create. As an illustration, while there have been numerous attempts to create grading software that can properly evaluate and give meaningful feedback on college level papers, these do not yet seem ready for prime time. However, there seems to be no a priori reason as to why such software could not be created. As such, perhaps one day the administrator’s dream will come true: a university consisting only of highly paid administrators and customers (formerly known as students) who are trained and graded by software. One day, perhaps, the ultimate ideal will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.

Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human or it might do the job in a different way that is seen as somewhat less desirable. However, there could be reasonable grounds for accepting a lesser quality or difference. For example, machine made items generally lack the individuality of human crafted items, but the gain in lowered costs and increased productivity are regarded as more than offsetting these concerns. Going back to the teaching example, a software educator and grader might be somewhat inferior to a good human teacher and grader, but the economy, efficiency and consistency of the robo-professor could make it well worthwhile.

There might, however, be cases in which a machine could do the job adequately in terms of completing specific tasks and meeting certain objectives, yet still be regarded as problematic because the machines do not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.

As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human—or merely be able to perform their tasks.

An argument against having machine caregivers and companions is one I considered in an earlier essay, namely a moral argument that people deserve people. For example, that an elderly person deserves a real person to care for her and understand her stories. As another example, that a child deserves a nanny that really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether or not this is really necessary for the job.

One way to look at it is to compare the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money, but has real feelings for the person.

On the one hand, it could be argued that caregivers and companions who do really care and feel genuine emotional attachments do a better job and that this connection is something that people do deserve. On the other hand, what is expected of paid professionals is that they complete the observable tasks—making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor that can excellently perform a role without actually feeling the emotions portrayed, a professional could presumably do the job very well without actually caring about the people they care for or escort. That is, a caregiver need not actually care—she just needs to perform the task.

While it could be argued that a lack of caring about the person would show in the performance of the task, this need not be the case. A professional merely needs to be committed to doing the job well—that is, one needs to care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for, yet be awful at the job.

Assuming that machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not what is going on in regards to the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions he is performing—he just needs to create a believable appearance that he is feeling what he is showing.

As such, an inability to care would not be a disqualification for a caregiving (or escort) job—whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.

 


Robot Love II: Roboslation under the Naked Sun

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.

Almost everyone is now familiar with the popular narrative about smart phones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new thing. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But the creation of reasonably sophisticated social robots is quite recent. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that are and will sell social robots are, unsurprisingly, quite positive about the future of social robots. There are, of course, some good arguments in their favor. Robot pets provide a good choice for people with allergies, who are not responsible enough for living pets, or who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round the clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.

Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.

It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.

One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company—at least those robots whose cost is not subsidized by advertising. Imagine a companion who pops in a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person, and presumably the owner/user would be able to make changes to the robot. A person can, quite literally, make a friend with the desired qualities and missing undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this regard.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.

Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like and so on. However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed and satisfactory companions could be created.

Since I have written specifically about sexbots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.

One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

 


Does the Legalization of Same-Sex Marriage Infringe on Religious Liberty?

In June 2015, the United States Supreme Court ruled in favor of the legality of same-sex marriage. Many states had already legalized same-sex marriage and a majority of Americans think it should be legal. As such, the ruling seems to be consistent both with the Constitution and with the democratic ideal of majority rule. There are, of course, those who object to the ruling.

Some claim that the court acted in a way contrary to democratic rule by engaging in judicial activism. Not surprisingly, some of those who make this claim were fine with such activism when the court ruled in ways they liked, despite the general principle being the same (that is, the court ruling in ways contrary to what voters had decided). I certainly do see the appeal of principled and consistent arguments against the Supreme Court engaging in activism and overruling what the voters have decided, and there is certainly some merit in certain arguments against the same-sex marriage decision. However, my concern here is with another avenue of dissent against the decision, namely that this ruling infringes on religious liberty.

The argument from religious liberty is certainly an interesting one. One intriguing aspect is that the argument is made in terms of religious liberty rather than the older tactic of openly attacking gay folks for alleged moral wickedness. This change of tactic seems to show a recognition that a majority of Americans accept their fellow gay Americans and that shouting “fags” at gays is no longer acceptable in polite society. As such, the tactic acknowledges a changed world. This change also represents clever rhetoric: the intent is not to deny gay folks their rights, but to protect religious liberty. Protecting liberty certainly sells better than denying rights. While protecting liberty is certainly commendable, the obvious question is whether or not the legalization of same-sex marriage infringes on religious liberty.

In general, there are two ways to infringe on a liberty. The first is by forbiddance, that is, preventing a person from exercising a freedom. For example, the liberty of free expression can be infringed by preventing a person from freely expressing her ideas. The second is by force, which is a matter of compelling a person to take action against her free choice. For example, a law that requires people to dress a certain way when they do not wish to do so infringes by force. Since some people consider entitlements to fall under liberties, another way a person could have her liberty infringed upon is to be denied her entitlements. For example, the liberty of education in the United States entitles children to a public education.

It is important to note that not all cases of forbidding or forcing are violations of liberties. This is because there are legitimate grounds for limiting liberties—the usual ground being the principle of harm. For example, it is not a violation of a person’s liberty to prevent him from texting death threats to his ex-wife. As another example, it is not a violation of a person’s liberty to require her to have a license to drive a car.

Given this discussion, for the legalization of same-sex marriage to impose on religious liberty would require that it wrongfully forbids religious people from engaging in religious activities, wrongfully forces religious people to engage in behavior contrary to their religion or wrongfully denies religious people entitlements connected to their religion.

The third one is the easiest and quickest to address: there does not seem to be any way that the legalization of same-sex marriage denies religious people entitlements connected to their religion. While I might not have considered all the possibilities, I will move on to the first two.

On the face of it, the legalization of same-sex marriage does not seem to wrongfully forbid religious people from engaging in religious activities. To give some examples, it does not forbid people from praying, attending religious services, saying religious things, or doing anything that they are not already free to do.

While some people have presented slippery slope “arguments” that this legalization will lead to such forbiddances, there is nothing in the ruling that indicates this or even mentions anything remotely like this. As with all such arguments, the burden of proof rests on those who claim that there will be this inevitable or probable slide. While inter-faith and inter-racial marriage are different matters, allowing these to occur was also supposed to lead to terrible things. None of these terrible things happened, which leads one to suspect that the doomsayers will be proven wrong yet again.

But, of course, if a rational case can be made linking the legalization of same-sex marriage to these violations of religious liberty, then it would be reasonable to be worried. However, the linkage seems to be a matter of psychological fear rather than logical support.

It also seems that the legalization of same-sex marriage does not wrongfully force religious people to engage in behavior contrary to their religion. While it is legal for same-sex couples to marry, this does not compel people to become gay and then gay-marry someone else who is (now) gay. Religious people are not compelled to like, approve of or even feel tolerant of same-sex marriage. They are free to dislike, disapprove of, and condemn it. They are free to try to amend the Constitution to forbid same-sex marriage.

It might be argued that religious people are compelled to allow other people to engage in behavior that is against their professed religious beliefs and this is a violation of religious freedom. The easy and obvious reply is that allowing other people to engage in behavior that is against one’s religion is not a violation of one’s religious liberty. This is because religious liberty is not the liberty to impose one’s religion on others, but the liberty to practice one’s religion.

The fact that I am at liberty to eat pork and lobster is not a violation of the religious liberty of Jews and Muslims. The fact that women can go out in public with their faces exposed is not a violation of the religious liberty of Muslims. The fact that people can have religions other than Christianity is not a violation of the religious liberty of Christians. As such, the fact that same-sex couples can legally marry does not violate the religious liberty of anyone.

It might be objected that it will violate the religious liberty of some people. Some have argued that religious institutions will be compelled to perform same-sex weddings (as they might be compelled to perform inter-racial or inter-faith marriages). This, I would agree, would be a violation of their religious liberty and liberty of conscience. Private, non-commercial organizations have every right to discriminate and exclude—that is part of their right of freedom of non-association. Fortunately, the legalization of same-sex marriage does not compel such organizations to perform these marriages. If it did, I would certainly oppose that violation of religious liberty.

It might also be objected that people in government positions would be required to issue same-sex marriage licenses, perform the legal act of marrying a same-sex couple, or recognize the marriage of a same-sex couple. People at the IRS would even be compelled to process the tax forms of same-sex couples.

The conflict between conscience and authority is nothing new and philosophers have long addressed this matter. Thoreau, for example, argued that people should follow their conscience and disobey what they regard as unjust laws.

This does have considerable appeal and I certainly agree that morality trumps law in terms of what a person should do. That is, I should do what is right, even if the law requires that I do evil. This view is a necessary condition for accepting that laws can be unjust or immoral, which is certainly something I accept. Because of this, I do agree that a person whose conscience forbids her from accepting same-sex marriage has the moral right to refuse to follow the law. That said, the person should resign from her post in protest rather than simply refusing to follow the law—as an official of the state, the person does have an obligation to perform her job and must choose between keeping that job and following her conscience. Naturally, a person also has the right to try to change what she regards as an immoral law.

I have the same view in regards to people who see interracial marriage as immoral: they should follow the dictates of their conscience and not take a job that would require them to, for example, issue marriage licenses. However, their right to their liberty of conscience does not override the rights of other citizens to marry. That is, their liberty does not morally warrant denying the liberty of others.

It could be argued that same-sex marriage should be opposed because it is objectively morally wrong and that even officials should oppose it on this ground. This line of reasoning does have a certain appeal: what is objectively wrong should be opposed, even if it is the law and even by officials. For example, when slavery was legal in the United States it should have been opposed by everyone, even officials of the state. But, arguing against same-sex marriage on moral grounds is a different matter from arguing against it on the grounds that it allegedly violates religious liberty.

It could be argued that the legalization of same-sex marriage will violate the religious liberty of people in businesses such as baking wedding cakes, planning weddings, photographing weddings and selling wedding flowers.

The legalization of same-sex marriage does not, by itself, forbid businesses from refusing to do business involving a same-sex marriage. Legal protection against that sort of discrimination is another, albeit related, matter. This sort of discrimination has also been defended on the grounds of freedom of expression, which I have addressed at length in other essays.

In regards to religious liberty, a business owner certainly has the right to not sell certain products or provide certain services that go against her religion. For example, a Jewish restaurant owner has the liberty to not serve pork. A devout Christian who owns a bookstore has the liberty to not stock the scriptures of other faiths or books praising same-sex marriage. An atheist t-shirt seller has the liberty to not stock any shirts displaying religious symbols. These are all matters of religious liberty.

I would also argue that religious liberty allows business owners to refuse to create certain products or perform certain services. For example, a Muslim freelance cartoonist has the right to refuse to draw cartoons of Muhammad. As another example, an atheist baker has the right to refuse to create a cake with a cross and quotes from scripture.

That said, religious liberty does not seem to grant a business owner the right to discriminate based on her religion. For example, a Muslim who owns a car dealership has no right to refuse to sell cars to women (or women who refuse to fully cover themselves). As another example, a militant homosexual who owns a bakery has no right to refuse to sell cakes to straight people.

Thus, it would seem that the legalization of same-sex marriage does not violate religious liberty.

The Implications of Self-Driving Cars

My friend Ron claims that “Mike does not drive.” This is not true: I do drive, but I do so as little as possible. Part of it is frugality, since I don’t want to spend more than I need to on gas and maintenance. Most of it is that I hate to drive. Some of this hatred is due to the fact that driving time is mostly wasted time (I would rather be doing something else), but most of it is that I find driving an awful blend of boredom and stress. As such, I am completely in favor of driverless cars and want Google to take my money. That said, it is certainly worth considering some of the implications of the widespread adoption of driverless cars.

One of the main selling points of driverless cars is that they are supposed to be significantly safer than human drivers. This is for a variety of reasons, many of which involve the fact that machines do not (yet) get sleepy, bored, angry, distracted or drunk. Assuming the increase in safety pans out, there will be significantly fewer accidents, and this will have a variety of effects.

Since insurance rates are (supposed to be) linked to accident rates, one might expect those rates to go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.
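
To make the actuarial logic concrete, here is a back-of-the-envelope sketch in Python. The accident rates, average claim size and loading factor are all invented for illustration; they are not real actuarial data:

```python
# Hypothetical premium estimate: expected claim cost per policy,
# marked up by a "loading" for the insurer's overhead and profit.
# All numbers are invented for illustration.
def annual_premium(accident_rate: float, avg_claim: float, loading: float = 0.3) -> float:
    expected_payout = accident_rate * avg_claim
    return expected_payout * (1 + loading)

human_driven = annual_premium(accident_rate=0.05, avg_claim=12_000)  # $780.00
self_driving = annual_premium(accident_rate=0.01, avg_claim=12_000)  # $156.00
print(f"human-driven: ${human_driven:,.2f}, self-driving: ${self_driving:,.2f}")
```

On this simple model, cutting the accident rate cuts the expected payout proportionally; whether insurers pass those savings on to customers or keep them as profit is exactly the question raised above.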

Lower accident rates also entail fewer injuries, which will presumably be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also mean less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and perhaps insurance rates (or merely mean more profits for insurance companies, since they would be paying out less often). On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. On the whole, though, reducing the number of injuries seems to be a moral good on utilitarian grounds.

A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing—on the assumption that death is bad. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths is probably good.

While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, it also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents, since vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.

Another economic impact of self-driving vehicles will be on those who make money driving other people. If my truck is fully autonomous, rather than take a cab to the airport, I can simply have my own truck drop me off and drive home. It can then come get me at the airport. People who like to drink to the point of impairment will also not need cabs or services like Uber, since their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines or buses: if your car can safely drive you to your destination while you sleep, play video games, read or even exercise (why not have exercise equipment in a vehicle for those long trips?), why put up with annoying pat downs, cramped seating, delays or cancellations?
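
The logistics being imagined here are simple enough to state in code. As a purely illustrative sketch (all names are hypothetical; this is not any real vehicle’s API), an owner might hand the vehicle an itinerary like this:

```python
# Hypothetical itinerary for a fully autonomous personal vehicle.
# The actions and schedule are invented to illustrate the logistics.
from dataclasses import dataclass

@dataclass
class Leg:
    time: str        # 24-hour clock, kept as a string for simplicity
    action: str      # "drop_off", "earn_fares", "return_home", "pick_up"
    location: str

itinerary = [
    Leg("06:00", "drop_off", "airport"),
    Leg("07:00", "earn_fares", "downtown"),  # optional sharing-economy shift
    Leg("17:00", "return_home", "home"),
    Leg("22:15", "pick_up", "airport"),
]

for leg in itinerary:
    print(f"{leg.time}: {leg.action} at {leg.location}")
```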

As a final point, if self-driving vehicles automatically operate within the traffic laws (such as speed limits and red lights), then the revenue from tickets and traffic violations will be reduced significantly. Since vehicles will be loaded with sensors and cameras, passengers (one cannot describe them as drivers anymore) will have considerable data with which to dispute any tickets. Parking revenue (fees and tickets) might also be reduced, since it might be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities: they would need to find alternative sources of revenue (or come up with new violations that self-driving cars cannot counter). Alternatively, the policing of roads might be significantly reduced; after all, if there are far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction in the size of the force. The downside of force reduction would be that the former police officers would be out of a job.
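
To see why ticket revenue would dry up, consider how simply a self-driving system could be made to respect posted limits. A minimal, hypothetical sketch (the map-data lookup is invented for illustration):

```python
# Hypothetical speed governor: never exceed the posted limit.
# The map lookup is a stand-in for real map or sign-recognition data.
def posted_limit_mph(road_segment: str) -> float:
    limits = {"residential": 25.0, "arterial": 45.0, "interstate": 70.0}
    return limits.get(road_segment, 25.0)  # default conservatively

def target_speed(desired_mph: float, road_segment: str) -> float:
    """Clamp the requested speed to the posted limit."""
    return min(desired_mph, posted_limit_mph(road_segment))

print(target_speed(80.0, "interstate"))   # 70.0: no speeding ticket possible
print(target_speed(80.0, "residential"))  # 25.0
```

A vehicle that cannot, by construction, speed or run a red light generates no moving-violation revenue at all.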

If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in considerable savings, although there would be the corresponding loss to those who sell, install and maintain these things.
