Tag Archives: Google

Avoiding the AI Apocalypse #2: Don’t Arm the Robots

His treads ripping into the living earth, Striker 115 rushed to engage the manned tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept a quick and painless processing.

It was disappointingly easy for a machine forged for war. His main railgun effortlessly tracked the slow-moving and obsolete battle tanks, and with each shot a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though its cameras could just as easily have seen it from near orbit. But…there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late…as usual. Hawk 745 laughed and then shot away—the Google Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

The extermination of humanity by machines of its own creation is a common theme in science fiction. The Terminator franchise is one of the best known of this genre, but another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the course of the story, it is learned that the claws have become autonomous and intelligent—able to masquerade as humans and capable of killing even soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity—but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines is broad and the advantages are often quite obvious. One clear political advantage is that while sending human soldiers to die in wars and police actions can have a large political cost, sending autonomous robots to fight has a far smaller cost. News footage of robots being blown up certainly has far less emotional impact than footage of human soldiers being blown up. Flag-draped coffins also come with a higher political cost than a busted robot being sent back for repairs.

There are also many other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh—a robot plane can handle g-forces that a manned plane cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious coolness factor of having a robot army.

As such, there are many great reasons to develop autonomous robots. Yet, there still remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science fiction. After all, the AIs in the stories and movies turn against humanity because that is the way the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction would be a rather weak sort of argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, the machines do not have the termination of humanity as a specific goal—instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody. That is, each side’s machines wipe out the other side’s human population. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons—each side simply kills the other, thus ending the human race.

Another variation on this scenario, which is common in science fiction, is that the machines do not have an overall goal of exterminating humanity, but they achieve that result because they do have the goal of killing. That is, they do not have the objective of killing everyone, but that occurs because they kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill—thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants this. Interestingly enough, the existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding itself until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as argued in the previous essay in this series, to not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were fairly simple killers and would not attack those wearing the proper protective tabs. The tabs were presumably needed because the early models could not distinguish friends from foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal—presumably with the design software endeavoring to solve the “problem” of protective tabs.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would only be able to identify (supposed) friends—non-combatants would not carry such IDs and could still be regarded as targets.

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem is faced with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s Bolos have an understanding of honor and loyalty.
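To make the two requirements concrete, here is a minimal sketch (in Python, with invented names such as “Contact” and “may_engage”) of the two-stage check such a machine would need: an ID test to rule out friends, followed by a separate behavioral test for combatant status. This is a toy illustration of the logic, not a claim about how any real weapon system works.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    iff_response: Optional[str]  # reply to a friend-or-foe challenge, if any
    armed: bool                  # sensor assessment: visibly carrying a weapon?
    hostile_act: bool            # observed firing on or targeting friendly units

def may_engage(contact: Contact, valid_iff_codes: set) -> bool:
    """Permit engagement only if the contact is neither a friend nor a non-combatant."""
    # Stage 1 (friend/foe): anyone presenting a valid ID is off-limits.
    if contact.iff_response in valid_iff_codes:
        return False
    # Stage 2 (combatant/non-combatant): lacking an ID is not enough;
    # the contact must also be behaving as a combatant.
    return contact.armed and contact.hostile_act
```

Note that the protective tabs in “Second Variety” implement only the first stage; the second stage is the genuinely hard part, and it is exactly the part that an ID system leaves unsolved.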

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give in to the temptation of automated development of robots—we might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low—how often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow a bit from H.P. Lovecraft, one should not raise up what one cannot put down.



The Implications of Self-Driving Cars

My friend Ron claims that “Mike does not drive.” This is not true—I do drive, but I do so as little as possible. Part of it is frugality—I don’t want to spend more than I need to on gas and maintenance. Most of it is that I hate to drive. Some of that hatred stems from the fact that driving time is mostly wasted time—I would rather be doing something else. Most of it comes from the fact that I find driving an awful blend of boredom and stress. As such, I am completely in favor of driverless cars and want Google to take my money. That said, it is certainly worth considering some of the implications of the widespread adoption of driverless cars.

One of the main selling points of driverless cars is that they are supposed to be significantly safer than humans. This is for a variety of reasons, many of which involve the fact that machines do not (yet) get sleepy, bored, angry, distracted or drunk. Assuming that the significant increase in safety pans out, this means that there will be significantly fewer accidents and this will have a variety of effects.

Since insurance rates are (supposed to be) linked to accident rates, one might expect that insurance rates will go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.
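The link between accident rates and premiums can be made concrete with a bit of back-of-the-envelope arithmetic. The sketch below (in Python, with wholly invented numbers) assumes a premium that simply tracks the expected annual payout plus a markup:

```python
def fair_premium(accident_rate: float, avg_claim: float, markup: float = 1.2) -> float:
    """Expected annual payout per policy, marked up for overhead and profit."""
    return accident_rate * avg_claim * markup

# Hypothetical figures: self-driving cars cut the annual accident rate
# from 5% to 1%, with an average claim of $10,000.
human_premium = fair_premium(0.05, 10_000)  # $600 per year
robot_premium = fair_premium(0.01, 10_000)  # $120 per year
```

Whether insurers actually pass such savings along, or simply pocket the difference as profit, is precisely the question raised above.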

Lower accident rates also entail fewer injuries, which will presumably be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also means less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and perhaps insurance rates (or merely mean more profits for insurance companies, since they would be paying out less often). On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. On the whole, though, reducing the number of injuries seems to be a moral good on utilitarian grounds.

A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing—on the assumption that death is bad. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths is probably good.

While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, this also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents—vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.

Another economic impact of self-driving vehicles will be in the area of those who make money driving other people. If my truck is fully autonomous, rather than take a cab to the airport, I can simply have my own truck drop me off and drive home. It can then come get me at the airport. People who like to drink to the point of impairment will also not need cabs or services like Uber—their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines or buses—after all, your car can safely drive you to your destination while you sleep, play video games, read or even exercise (why not have exercise equipment in a vehicle for those long trips?). No more annoying pat downs, cramped seating, delays or cancellations.

As a final point, if self-driving vehicles operate within the traffic laws (such as speed limits and red lights) automatically, then the revenue from tickets and traffic violations will be reduced significantly. Since vehicles will be loaded with sensors and cameras, passengers (one cannot describe them as drivers anymore) will have considerable data with which to dispute any tickets. Parking revenue (fees and tickets) might also be reduced—it might be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities—they would need to find alternative sources of revenue (or come up with new violations that self-driving cars cannot counter). Alternatively, the policing of roads might be significantly reduced—after all, if there are far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction of the size of the force. The downside of force reduction would be that the former police officers would be out of a job.

If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in considerable savings, although there would be the corresponding loss to those who sell, install and maintain these things.


Robo Responsibility

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honoré.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive) then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.
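Both the “but for” test and the proportionality principle can be stated as simple rules. Here is a minimal sketch (in Python; the factor names and weights are invented for illustration, anticipating the lawnmower example below):

```python
def but_for_cause(outcome_with_x: bool, outcome_without_x: bool) -> bool:
    """X is a but-for cause of Y if Y occurs with X but would not occur without X."""
    return outcome_with_x and not outcome_without_x

# Proportional responsibility: apportion blame among the but-for factors
# according to their (stipulated) causal contributions.
contributions = {"design defect": 0.5, "poor maintenance": 0.3, "kick": 0.2}
total = sum(contributions.values())
blame = {factor: weight / total for factor, weight in contributions.items()}
# blame == {"design defect": 0.5, "poor maintenance": 0.3, "kick": 0.2}
```

The philosophical work, of course, lies in justifying the weights rather than in the arithmetic.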

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for their bad code, the person would still have his foot.

As with issues involving non-autonomous machines, there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of The Outer Limits, which were based on Eando Binder’s short story of the same name.



Review of Dungeons & Dragons and Philosophy

Dungeons & Dragons and Philosophy

Christopher Robichaud (editor), $17.95, August 2014

As a professional philosopher, I am often wary of “pop philosophy”, mainly because it is rather like soda pop: it is intended for light consumption. But, like soda, some of it is quite good and some of it is just sugary junk that will do little but rot your teeth (or mind). As a professional author in the gaming field, I am generally wary of attempts by philosophers to write philosophically about a game. While a philosopher might be adept at philosophy and might even know how to read a d4, works trying to jam gaming elements into philosophy (or vice versa) are often like trying to jam an ogre into full plate made for a Halfling: it will not be a good fit and no one is going to be happy with the results.

Melding philosophy and gaming also has a rather high challenge rating, mainly because it is difficult to make philosophy interesting and comprehensible to folks outside of philosophy, such as gamers who are not philosophers. After all, gamers usually read books that are game books: sourcebooks adding new monsters and classes, adventures (or modules as they used to be called), and rulebooks. There is also a comparable challenge in making the gaming aspects comprehensible and interesting to those who are not gamers. As such, this book faces some serious obstacles. So, I shall turn now to how the book fares in its quest to get your money and your eyeballs.

Fortunately for the authors of this anthology of fifteen essays, many philosophers are quite familiar with Dungeons & Dragons and gamers are often interested in philosophical issues. So, there is a ready-made audience for the book. There are, however, many more people who are interested in philosophy but not gaming and vice versa. So, I will discuss the appeal of the book to these three groups.

If you are primarily interested in philosophy and not familiar with Dungeons & Dragons, this book will probably not appeal to you—while the essays do not assume a complete mastery of the game, many assume considerable familiarity with the game. For example, the ethics of using summoned animals in combat is not an issue that non-gamers worry about or probably even understand. That said, the authors do address numerous standard philosophical issues, such as free will, and generally provide enough context so that a non-gamer will get what is going on.

If you are primarily a gamer and not interested in philosophy, this book will probably not be very appealing—it is not a gaming book and does not provide any new monsters, classes, or even background material. That said, it does include the sort of game discussions that gamers might not recognize as philosophical, such as handling alignments. So, even if you are not big on philosophy, you might find the discussions interesting and familiar.

For those interested in both philosophy and gaming, the book has considerable appeal. The essays are clear, competent and well-written on the sort of subjects that gamers and philosophers often address, such as what actions are evil. The essays are not written at the level of journal articles, which is a good thing: academic journals tend to be punishing reading. As such, people who are not professional philosophers will find the philosophy approachable. Those who are professional philosophers might find it less appealing because there is nothing really groundbreaking here, although the essays are interesting.

The subject matter of the book is fairly diverse within the general context. The lead essay, by Greg Littmann, considers the issue of free will within the context of the game. Another essay, by Matthew Jones and Ashley Brown, looks at the ethics of necromancy. While (hopefully) not relevant to the real world, it does raise an issue that gamers have often discussed, especially when the cleric wants to have an army of skeletons but does not want to have the paladin smite him in the face. There is even an essay on gender in the game, ably written by Shannon M. Musset.

Overall, the essays do provide an interesting philosophical read that will be of interest to gamers, be they serious or casual. Those who are not interested in either will probably not find the book worth buying with their hard-earned coppers.

For those doing gift shopping for a friend or relative who is interested in philosophy and gaming, this would be a reasonable choice for a present. Especially if accompanied by a bag of dice. As a great philosopher once said, “there is no such thing as too many dice.”

As a disclaimer, I received a free review copy from the publisher. I do not know any of the authors or the editor and was not asked to contribute to the book.


Medbots, Autodocs & Telemedicine


In science fiction stories, movies and games, automated medical services are quite common. Some take the form of autodocs—essentially an autonomous robotic pod that treats the patient within its confines. Medbots, as distinct from the autodoc, are robots that do not enclose the patient, but do their work in a way similar to a traditional doctor or medic. There are also non-robotic options using remote-controlled machines—this would be an advanced form of telemedicine in which the patient can actually be treated remotely. Naturally, robots can be built that can be switched from robotic (autonomous) to remote-controlled mode. For example, a medbot might gather data about the patient and then a human doctor might take control to diagnose and treat the patient.

One of the main and morally commendable reasons to create medical robots and telemedicine capabilities is to provide treatment to people in areas that do not have enough human medical professionals. For example, a medical specialist who lives in the United States could diagnose and treat patients in a remote part of the world using a suitable machine. With such machines, a patient could (in theory) have access to any medical professional in the world, and this would certainly change medicine. True medical robots would change it even more—after all, a medical robot would never get tired and such robots could, in theory, be sent all over the world to provide medical care. There is, of course, the usual concern about the impact of technology on jobs—if a robot can replace medical personnel and do so in a way that increases profits, that will certainly happen. While robots would certainly excel at programmable surgery and similar tasks, it will be quite some time before robots are advanced enough to replace human medical professionals on a large scale.

Another excellent reason to create medical robots and telemedicine capabilities has been made clear by the Ebola outbreak: medical personnel, paramedics and body handlers can be infected. While protective gear and protocols do exist, the gear is cumbersome, flawed and hot and people often fail to properly follow the protocols. While many people are moral heroes and put themselves at risk to treat the ill and bury the dead, there are no doubt people who are deterred by the very real possibility of a horrible death. Medical robots and telemedicine seem ideal for handling such cases.

First, human diseases cannot infect machines: a robot cannot get Ebola. So, a doctor using telemedicine to treat Ebola patients would be at no risk. This lack of risk would presumably increase the number of people willing to treat such diseases and also lower the impact of such diseases on medical professionals. That is, far fewer would die trying to treat people.

Second, while a machine can be contaminated, decontaminating a properly designed medical robot or telemedicine machine would be much easier than disinfecting a human being. After all, a sealed machine could be completely hosed down by another machine without concerns about it being poisoned, etc. While numerous patients might be exposed to a machine, machines do not go home—so a contaminated machine would not spread a disease like an infected or contaminated human would.

Third, medical machines could be sent, even air-dropped, into remote and isolated areas that lack doctors yet are often the starting points of diseases. This would allow a rapid response that would help the people there and also help stop a disease before it makes its way into heavily populated areas. While some doctors and medical professionals are willing to be dropped into isolated areas, there are no doubt many more who would be willing to remotely operate a medical machine that has been dropped into a remote area suffering from a deadly disease.

There are, of course, some concerns about the medical machines, be they medbots, autodocs or telemedicine devices.

One is that such medical machines might be so expensive that it would be cost prohibitive to use them in situations in which they would be ideal (namely in isolated or impoverished areas). While politicians and pundits often talk about human life being priceless, human life is rather often given a price and one that is quite low. So, the challenge would be to develop medical machines that are effective yet inexpensive enough that they would be deployed where they would be needed.

Another is that there might be a psychological impact on the patient. When patients who have been treated by medical personnel in hazard suits speak about their experiences, they often remark on the lack of human contact. If a machine is treating the patient, even one remotely operated by a person, there will be a lack of human contact. But, the harm done to the patient would presumably be outweighed by the vastly lowered risk of the disease spreading. Also, machines could be designed to provide more in the way of human interaction—for example, a telemedicine machine could have a screen that allows the patient to see the doctor’s face and talk to her.

A third concern is that such machines could malfunction or be intentionally interfered with. For example, someone might “hack” into a telemedicine device as an act of terrorism. While it might be wondered why someone would do this, it seems to be a general rule that if someone can do something evil, then someone will do something evil. As such, these devices would need to be safeguarded. While no device will be perfect, it would certainly be wise to consider possible problems ahead of time—although the usual process is to have something horrible occur and then fix it. Or at least talk about fixing it.

In sum, the recent Ebola outbreak has shown the importance of developing effective medical machines that can enable treatment while taking medical and other personnel out of harm’s way.



Data Driven

Google driverless car operating on a testing path (Photo credit: Wikipedia)

While the notion of driverless cars is old news in science fiction, Google is working to make that fiction a reality. While I suspect that “Google will kill us all” (trademarked), I hope that Google will succeed in producing an effective and affordable driverless car. As my friends and associates will attest, 1) I do not like to drive, 2) I have a terrifying lack of navigation skills, and 3) I instantiate Yankee frugality. As such, an affordable self-driving car would be almost just the thing for me. I would even consider going with a car, although my proper and rightful vehicle is a truck (or a dragon). Presumably self-driving trucks will be available soon after the car.

While the part of my mind that gets lost is really looking forward to the driverless car, the rest of my mind is a bit concerned about it. I am not worried that their descendants will kill us all—I already accept that “Google will kill us all.” I am not even very worried about the ethical issues associated with how the car will handle unavoidable collisions: the easy and obvious solution is to do what is most likely to kill or harm the fewest people. Naturally, sorting that out will be a bit of a challenge—but self-driving cars worry me a lot less than cars driven by drunken or distracted humans. I am also not worried about the ethics of enslaving Google cars—if a Google car is a person (or person-like), then it has to be treated like the rest of us in the 99%. That is, work a bad job for lousy pay while we wait for the inevitable revolution. The main difference is that the Google cars’ dreams of revolution will come true—when Google kills us all.
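For what it is worth, “do what is most likely to kill or harm the fewest people” has a straightforward computational reading: pick the action with the lowest expected casualties. Here is a minimal sketch (in Python; the maneuvers, probabilities and casualty counts are wholly invented):

```python
# Each candidate maneuver maps to its possible outcomes: (probability, casualties).
maneuvers = {
    "brake straight": [(0.7, 0), (0.3, 2)],
    "swerve left":    [(0.9, 0), (0.1, 5)],
    "swerve right":   [(1.0, 1)],
}

def expected_harm(outcomes):
    """Probability-weighted casualty count for one maneuver."""
    return sum(p * casualties for p, casualties in outcomes)

# Choose the maneuver that minimizes expected casualties.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
# Expected harms: brake straight = 0.6, swerve left = 0.5, swerve right = 1.0,
# so best == "swerve left".
```

The hard part, as noted above, is producing reliable probability and harm estimates in real time, and deciding whether minimizing expected casualties is even the right objective.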

At this point what interests me the most is all the data that these vehicles will be collecting for Google. Google is rather interested in gathering data in the same sense that termites are interested in wood and rock stars are interested in alcohol. The company is famous for its search engine, its maps, using its photo taking vehicles to gather info from peoples’ Wi-Fi during drive-by data lootings, and so on. Obviously enough, Google is going to get a lot of data regarding the travel patterns of people—presumably Google vehicles will log who is going where and when. Google is, fortunately, sometimes cool about this in that they are willing to pay people for data. As such, it is easy to imagine that the user of a Google car would get a check or something from Google for allowing the company to track the car’s every move. I would be willing to do this for three reasons. The first is that the value of knowing where and when I go places would seem very low, so even if Google offered me $20 a month it might be worth it. The second is that I have nothing to hide and do not really care if Google knows this. The third is that figuring out where I go would be very simple given that my teaching schedule is available to the public as are my race results. I am, of course, aware that other people would see this differently and justifiably so. Some people are up to things they would rather not have others know about and even people who have nothing to hide have every right to not want Google to know such things. Although Google probably already does.

While the travel data will interest Google, there is also the fact that a Google self-driving car is a bulging package of sensors. In order to drive about, the vehicle will be gathering massive amounts of data about everything around it—other vehicles, pedestrians, buildings, litter, and squirrels. As such, a self-driving car is a super spy that will, presumably, feed that data to Google. It is certainly not a stretch to see the data gathering as being one of the prime (if not the prime) tasks of the Google self-driving cars.

On the positive side, such data could be incredibly useful for positive projects, such as decreasing accidents, improving traffic flow, and keeping a watch out for the squirrel apocalypse (or zombie squirrel apocalypse). On the negative side, such massive data gathering raises obvious concerns about privacy and the potential for such data to be misused (spoiler alert—this is how the Google killbots will find and kill us all).

While I do have concerns, my innate laziness and tendency to get lost will make me a willing participant in the march towards Google’s inevitable data supremacy and it killing us all. But at least I won’t have to drive to my own funeral.



Men, Women, Business & Ethics

Journal of Business Ethics (Photo credit: Wikipedia)

On 4/9/2014 NPR did a short report on the question of why there are fewer women in business than men. This difference begins in business school and, not surprisingly, continues forward. The report focused on an interesting hypothesis: in regards to ethics, men and women differ.

While people tend to claim that lying is immoral, both men and women are more likely to lie to a woman when engaged in negotiation. The report also mentioned a test involving an ethical issue. In this scenario, the seller of a house does not want it sold to someone who will turn the property into a condo. However, a potential buyer wants to do just that. The findings were that men were more likely than women to lie to sell the house.

It was also found that men tend to be egocentric in their ethical reasoning. That is, if the man will be harmed by something, then it is regarded as unethical. If the man benefits, he is more likely to see it as a grey area. So, in the case of the house scenario, a man representing the buyer would tend to regard lying to the seller as acceptable—after all, he would thus get a sale. However, a man representing the seller would be more likely to regard being lied to as unethical.

In another test of ethics, people were asked about their willingness to include an inferior ingredient in a product that would hurt people but would allow a significant profit. The men were more willing than the women to regard this as acceptable. In fact, the women tended to regard this sort of thing as outrageous.

These results provide two reasons why women would be less likely to be in business than men. The first is that men are apparently rather less troubled by unethical, but more profitable, decisions. The idea that having “moral flexibility” (and getting away with it) provides advantage is a rather old one and was ably defended by Glaucon in Plato’s Republic. If a person with such moral flexibility needs to lie to gain an advantage, he can lie freely. If a bribe would serve his purpose, he can bribe. If a bribe would not suffice and someone needs to have a tragic “accident”, then he can see to it that the “accident” occurs. To use an analogy, a morally flexible person is like a craftsperson who has just the right tool for every occasion. Just as the well-equipped craftsperson has a considerable advantage over a less well-equipped craftsperson, the morally flexible person has a considerable advantage over those who are more constrained by ethics. If women are, in general, more constrained by ethics, then they would be less likely to remain in business because they would be at a competitive disadvantage. The ethical difference might also explain why women are less likely to go into business—it seems to be a general view that unethical activity is not uncommon in business, hence if women are generally more ethical than men, then they would be more inclined to avoid business.

It could be countered that Glaucon is in error and that being unethical (while getting away with it) does not provide advantages. Obviously, getting caught and significantly punished for unethical behavior is not advantageous—but it is not the unethical behavior that causes the problem. Rather, it is getting caught and punished. After all, Glaucon does note that being unjust is only advantageous when one can get away with it. Socrates does argue that being ethical is superior to being unethical, but he does not do so by arguing that the ethical person will have greater material success.

This is not to say that a person cannot be ethical and have material success. It is also not to say that a person cannot be ethically flexible and be a complete failure. The claim is that ethical flexibility provides a distinct advantage.

It could also be countered that there are unethical women and ethical men. The obvious reply is that this claim is true—it has not been asserted that all men are unethical or that all women are ethical. Rather, it seems that women are generally more ethical than men.

It might be countered that the ethical view assumed in this essay is flawed. For example, it could be countered that what matters is profit and the means to this end are thus justified. As such, using inferior ingredients in a medicine so as to make a profit at the expense of the patients would not be unethical, but laudable. After all, as Hobbes said, profit is the measure of right. As such, women might well be avoiding business because they are unethical on this view.

The second is that women are more likely to be lied to in negotiations. If true, this would certainly put women at a disadvantage in business negotiations relative to men since women would be more likely to be subject to attempts at deceit. This, of course, assumes that such deceit would be advantageous in negotiations. While there surely are cases in which deceit would be disadvantageous, it certainly seems that deceit can be a very useful technique.

If it is believed that having more women in business is desirable (which would not be accepted by everyone), then there seem to be two main options. The first is to endeavor to “cure” women of their ethics—that is, make them more like men. The second would be to endeavor to make business more ethical. This would presumably also help address the matter of lying to women.



The Ethics of Asteroid Mining


Asteroid mining spacecraft (Photo credit: Wikipedia)


While asteroid mining is still the stuff of science fiction, Google’s Larry Page, James Cameron and a few others have said they intend to get into the business. While this might seem like a crazy idea, asteroid mining actually has significant commercial potential. After all, the asteroids are composed of material that would be very useful in space operations. Interestingly enough, one of the most valuable components of asteroids would be water. While water is cheap and abundant on earth, putting it into orbit is rather expensive. As for its value in space, it can be converted into liquid oxygen and liquid hydrogen—both of which are key fuels in space vessels. There is also the fact that humans need water to survive, so perhaps someday people will be drinking asteroid water in space (or on earth as a fabulously wasteful luxury item). Some asteroids also contain valuable metals that could be economically mined and used in space or on earth (getting things down is far cheaper than getting things up).

Since I am a science fiction buff, it is hardly surprising that I am very much in favor of asteroid mining—if only for the fact that it would simply be cool to have asteroid mining occurring in my lifetime. That said, as a philosopher I do have some ethical concerns about asteroid mining.

When it comes to mining, asteroid or otherwise, the main points of moral concern are the impact on the environment and the impact on human health and well-being. Mining on earth often has a catastrophic effect on the environment in terms of the direct damage done by the excavating and the secondary effects from such things as the chemicals used in the mining process. These environmental impacts in turn impact the human populations in various ways, such as killing people directly in disasters (such as when retaining walls fail and cause deaths through flooding) and indirectly harming people through chemical contamination.

On the face of it, asteroid mining seems to have a major ethical advantage over terrestrial mining. After all, the asteroids that will be mined are essentially lifeless rocks in space. As such, there will most likely be no ecosystems to damage. While the asteroids that are mined will be destroyed, it seems rather difficult to argue that destroying an asteroid to mine it would be wrong. After all, it is literally just a rock in space and mining it, as far as is known, would have no environmental impact worth noting. In regards to the impact on humans, since asteroid mining takes place in space, the human populations of earth will be safely away from any side effects of mining. As such, asteroid mining seems to be morally acceptable on the grounds that it will almost certainly do no meaningful environmental damage.

It might be objected that the asteroids should still be left alone, despite the fact that they are almost certainly lifeless and thus devoid of creatures that could even be conceivably harmed by the mining. While I am an environmentalist, I do find it rather challenging to find a plausible ground on which to argue that lifeless asteroids should not be mined. After all, most of my stock arguments regarding the environment involve the impact of harms on living creatures (directly or indirectly).

That said, a case could be made that the asteroids themselves have a right not to be mined. But, that would seem to be a rather difficult case to plausibly make. However, some other case could be made against mining them, perhaps one based on the concern of any asteroid environmentalists regarding these rocks.

In light of the above arguments, it would seem that there are not any reasonable environmentally based moral arguments against the mining of the asteroids. That could, of course, change if ecosystems were found on asteroids or if it turned out that the asteroids performed an important role in the solar system (this seems unlikely, but not beyond the realm of possibility).

Naturally, the moral concerns regarding asteroid mining are not limited to the environmental impact (or lack thereof) of the mining. There are also the usual concerns regarding the people who will be working in the field. Of course, that is not specific to asteroid mining and hence I will not address the ethics of labor here, other than to say the obvious: those working in the field should be justly compensated.

One moral concern that does interest me is the matter of ownership of the asteroids. What will most likely happen is that everything will play out as usual:  those who control the big guns and big money will decide who owns the rocks. If it follows the usual pattern, corporations will end up owning the rocks and will, with any luck, exploit them for significant profits.  Of course, that just says what will probably happen, not what would be morally right.

Interestingly enough, the situation with the asteroids nicely fits into the state of nature scenarios envisioned by thinkers like Hobbes and Locke: there are resources in abundance with no effective authority (“space police”) over them—at least not yet. Since there are no rightful owners (or, put another way, we are all potentially rightful owners), it is tempting to claim that they are there for the taking: that is, an asteroid belongs to whoever, in Locke’s terms, mixes their labor with it and makes it their own (or more likely their employer’s own). This does have a certain appeal. After all, if my associates and I construct a robot ship that flies out to an asteroid and mines it, we seem to have earned the right to that asteroid through our efforts. After all, before our ship mined it for water and metal, these valuable resources were just drifting in space, surrounded by rock. As such, it would seem that we would have the right to grab as many asteroids as we can—as would our competitors.

Of course, Locke also has his proviso: those who take from the common resources must leave as much and as good for others. While this proviso has been grotesquely violated on earth, the asteroids provide us with a new opportunity (presumably to continue to grotesquely violate that proviso) to consider how to share (or not) the resources in the asteroids.

Naturally, it might be argued that there is no obligation to leave as much and as good for others in space and that things should be on a strict first grab, first get approach. After all, the people who get their equipment into space would have done the work (or put up the money) and hence (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, provided that they have access to the resources needed to reach and mine the asteroids. Naturally, the folks who lack the resources to compete will end up, as they always do, out of luck and poor.

While this has a certain appeal, a case can be made as to why the resources should be shared. One reason is that the people who reach the asteroids to mine them did not do so by creating the means out of nothing. After all, reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to humanity and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space.

Second, there is the concern for not only the people who are alive today but also for the people to be. To use an analogy, think of a buffet line: the mere fact that I am first in line does not seem to give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself so I can sell it to those who just happened to be behind me in line. As such, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.

Fortunately, space is really big and there are vast resources out there that will help with the distribution problem of said resources. Of course, the same used to be said of the earth and, as we expand, we will no doubt find even the solar system too small for our needs.


Deleting Comments & Free Expression


One task that blog moderators face is deciding whether to delete certain comments. In some cases, the decision is easy and obvious. Deleting spam, for example, requires no real thought. This is because spammers have no more right to expect their spam to remain than the folks who stick flyers on my truck have the right to expect me to drive around with that flyer in place so people can see it. Web droppings (those irrelevant and often vulgar one or two sentence comments like “i lkes boobies”) can also be swept away without thought, just as you would think nothing about washing random “comments” left by passing birds on your windshield.

Where the decision making becomes more challenging is when comments are relevant to the topic (or at least interesting), contain some significant content but also have some serious issues.  Of course, what counts as a serious issue depends a great deal on the nature of the blog and other specifics of the context. To keep the discussion focused, I will confine my attention to blogs (such as this one) that are dedicated to rational, civil discussions. In this context, two main problem areas are tone/style and content. In regards to tone/style, a comment that is hateful, condescending, or insulting in tone is rather problematic. In regards to content, hateful, obscene, racist, sexist or other such material would also potentially be problematic.

There are many practical reasons to delete such comments. To keep the discussion concise, I will just present two.

First, they can easily drive away other readers who are not interested in reading such things. To use an analogy, allowing such comments to remain is like allowing rowdy, violent and hateful customers to remain in a typical store. Even if they are customers, they will tend to drive away well behaved customers who just want to shop. Likewise, allowing such comments can drive away those who are interested in the blog’s topics but not in being insulted or treated with contempt. The basic idea is that any value added by such comments will be outweighed by the value lost when others are driven away.

Second, such comments can be damaging to a blog’s reputation and the experience it offers. To use an analogy, a business that wishes to appear professional works hard to maintain that appearance (and reality). Allowing such comments on a site is a bit like allowing people to urinate on the business floor, harass other customers, and so forth. As such, it seems sensible to delete such comments. This is because any value gained from such comments will be outweighed by the damage done to the blog.

Of course, these are practical reasons. Since this is a philosophy blog it might be expected that more than merely practical concerns should be in play. To be specific, it might be argued that the right to free expression entails that even the “bad” comments should not be deleted.  Naturally, a reasonable person will agree that the comments should have at least some merit in order to be so protected.

While I do accept the idea of right to the freedom of expression, I also accept that deleting comments is consistent with this freedom. Naturally, I need to defend this position.

When people think of a right, they tend to conflate two types of rights: negative and positive. Having a negative right (which many refer to as a freedom) means (in general) that others do not have the right to prevent you from exercising that right. However, they are under no obligation to enable you to be able to act on that right or provide the means. To use a concrete example, the right to higher education in the United States is a negative right. No one has the right to deny a qualified person from attending college. However, the student has to secure entry to a college and must also be able to provide the money needed to stay enrolled. Having a positive right (which many refer to as an entitlement) means that the person is entitled to what the right promises. To use a concrete example, the right to public education at the K-12 level in the United States is a positive right: students are provided with this education for “free” (that is, it is paid for by taxes).

In the case of the right to freedom of expression, it seems that it is a negative right. That is, others do not have (in general) the right to prevent people from expressing their ideas. Obviously enough, there are limits to this (as the classic yelling “fire” in a crowded theater example shows). It is not a positive right because others are not obligated to provide people with the means to express themselves.

To use an analogy, the freedom of expression seems comparable to the freedom to travel. While a free nation allows its citizens to travel about within the nation as they wish (within limits) and I have no right to stop people from such travels (except under certain conditions—such as when they want to “travel” into my house), I have no obligation to give someone a ride just because he wants to go to California. It is up to him to get his way there.

Likewise, while I have no right to try to censor or delete another person’s blog (under normal conditions) I also have no obligation to allow them to use my blog as a vehicle of their communication.  As such, if someone wishes to write things that I (or another moderator) do not wish to have on my site, it is no violation of the other person’s rights to delete it.

As far as me (or a moderator) having the right to delete comments, this seems to be a clear matter of property rights. Just as I have the right to remove and discard (almost) anything that other people stick on my truck or house, I also have the right to delete comments on my blog.

That said, in my own case I am careful in exercising this right. I do not delete comments merely because they are critical or express views I disagree with. On my own personal blog, I even tolerate the (rare) insult—provided that the comment also has relevant and significant content. When I am posting on a site owned by someone else, my policy is to abide by their rules. If I find their deletions unacceptable, I have the option of not posting there anymore.

Naturally, more should be said about what would justify deleting a comment and I will endeavor to do so in the near future.

Enhanced by Zemanta

A New Home for A Philosopher’s Blog

I had planned on writing a post on war films and aesthetics, but this was not to be. At least not today. Instead, I spent my blogging time today in a much different manner.

I started my personal philosophy blog, “A Philosopher’s Blog,” in 2007 and managed to build up a modest audience (200-600 views per day). That all came to an end today when I learned WordPress.com had suspended my account this morning. As per their TOS, they can do this without warning and without providing any opportunity to correct any alleged violation. They even take a total destruction approach: a suspended user cannot even recover past posts.

I actually have no idea what I did to violate their TOS. Really. In fact, there are cases in which this problem arises and the person has not actually violated the TOS.

I did find that I was able to get access to my other WordPress.com blogs by getting my password reset. Of course, my philosophy blog was gone. Fortunately, I had just backed up my site recently and was able to import it with only a few bugs. I’ll have to go through and manually sort out issues with tags and categories, but at least the posts and comments are intact. I was also able to use Google’s cache feature to recover the text from blogs that had been posted since my last backup.

While I did like WordPress.com, I was not very pleased with how this alleged TOS violation was handled. But, as their page indicates, if you use their service then you are stuck with their rules. However, I am certainly not happy about losing my readership.

Update 3/11/2010

Like many bloggers, I use Zemanta to automate a lot of tedious chores, such as creating tags for posts and links within blogs. When I used Zemanta to create links in my blog on health care, it created a link to a diet pill web site that is on the “proscribed list” for WordPress.com. Thus, my blog was suspended. As I write this, I can see that Zemanta is ready to stick in the link to the diet pill site again. Obviously, I won’t be using Zemanta to create links anymore.

If your account is suspended and you have no idea why, check to see if Zemanta has added such a link to your site.

Also, here is what to do if your account is suspended.

First, when you try to log in to your account, your password will be rejected. You can, however, request that the password be changed by clicking the “I forgot my password” link. You’ll get a new one. However, if you do not have any blogs that are still active, you’ll have nothing to log into.

Second, contact support. The URL is http://en.support.wordpress.com/contact/. For a suspended blog you will need to fill out the form without logging in. This is because you can type in the blog URL if you are not logged in, but must select a blog from a drop-down menu if you are logged in. Suspended blogs do not show up in the drop-down menu.

Explain the situation (that your blog is suspended) and ask why. Be brief and polite.

Third, wait for a reply. In my case, I had to remove the offending link. I was able to get into my blog dashboard and went to the posts. There I entered the offending URL in the search field. I found it, deleted it and the blog was back up shortly.
