Category Archives: Philosophy

Is Libertarianism Viable?

The United States has had a libertarian and anarchist thread since the beginning, which is certainly appropriate for a nation that espouses individual liberty and expresses distrust of the state. While there are many versions of libertarianism and these range across the political spectrum, I will focus on one key aspect of libertarianism. To be specific, I will focus on the idea that the government should impose minimal limits on individual liberty and that there should be little, if any, state regulation of business. These principles were laid out fairly clearly by the American anarchist Henry David Thoreau in his claims that the best government governs least (or not at all) and that government only advances business by getting out of its way.

I must admit that I find the libertarian-anarchist approach very appealing. Like many politically minded young folks, I experimented with a variety of political theories in college. I found Marxism unappealing—as a metaphysical dualist, I must reject materialism. Also, I was well aware of the brutally oppressive and murderous nature of the Marxist states, which were in direct opposition to both my ethics and my view of liberty. Fascism was certainly right out—the idea of the total state ran against my views of liberty. Since, like many young folks, I thought I knew everything and did not want anyone to tell me what to do, I picked anarchism as my theory of choice. Since I am morally opposed to murdering people, even for a cause, I sided with the non-murderous anarchists, such as Thoreau. I eventually outgrew anarchism, but I still have many fond memories of my halcyon days of naïve political views. As such, I do really like libertarian-anarchism and really want it to be viable. But, I know that liking something does not entail that it is viable (or a good idea).

Put in extremely general terms, a libertarian system would have a minimal state with extremely limited government impositions on personal liberty. The same minimalism would also extend to the realm of business—businesses would operate with little or no state control. Since such a system seems to maximize liberty and freedom, it is initially very appealing. After all, freedom and liberty are good and more of a good thing is better than less. Except when it is not.

It might be wondered how more liberty and freedom is not always better than less. I find two of the stock answers both appealing and plausible. One was laid out by Thomas Hobbes. In discussing the state of nature (which is a form of anarchism—there is no state), he notes that total liberty (the right to everything) amounts to no right at all. This is because everyone is free to do anything and everyone has the right to claim (and take) anything. This leads to his infamous war of all against all, making life “nasty, brutish and short.” Like too much oxygen, too much liberty can be fatal. Hobbes’ solution is the social contract and the sovereign: the state.

A second was presented by J.S. Mill. In his discussion of liberty he argued that liberty requires limitations on liberty. While this might seem like a paradox or a slogan from Big Brother, Mill is actually quite right in a straightforward way. For example, your right to free expression requires that my right to silence you be limited. As another example, your right to life requires limits on my right to kill. As such, liberty does require restrictions on liberty. Mill does not limit the limiting of liberty to the state—society can impose such limits as well.

Given the plausibility of the arguments of Hobbes and Mill, it seems reasonable to accept that there must be limits on liberty in order for there to be liberty. Libertarians, who usually fall short of being true anarchists, do accept this. However, they do want the broadest possible liberties and the least possible restrictions on business.

In theory, this would appear to show that libertarianism provides the basis for a viable political system. After all, if libertarianism is the view that the state should impose the minimal restrictions needed to have a viable society, then it would be (by definition) a viable system. However, there is the matter of libertarianism in practice and also the question of what counts as a viable political system.

Looked at in a minimal sense, a viable political system would seem to be one that can maintain its borders and internal order. Meeting these two minimal objectives would seem to be possible for a libertarian state, at least for a while. That said, the standards for a viable state might be taken to be somewhat higher, such as the state being able to (as per Locke) protect rights and provide for the good of the people. It can be (and has been) argued that such a state would need to be more robust than the libertarian state. It can also be argued that a true libertarian state would either devolve into chaos or be forced into abandoning libertarianism.

In any case, the viability of a libertarian state would seem to depend on two main factors. The first is the ethics of the individuals composing the state. The second is the relative power of the individuals. This is because the state is supposed to be minimal, so that limits on behavior must be set largely by other factors.

In regards to ethics, people who are moral can be relied on to self-regulate their behavior to the degree they are moral. To the degree that the population is moral the state does not need to impose limitations on behavior, since the citizens will generally not behave in ways that require the imposition of the coercive power of the state. As such, liberty would seem to require a degree of morality on the part of the citizens that is inversely proportional to the limitations imposed by the state. Put roughly, good people do not need to be coerced by the state into being good. As such, a libertarian state can be viable to the degree that people are morally good. While some thinkers have faith in the basic decency of people, many (such as Hobbes) regard humans as lacking in what others would call goodness. Hence, the usual arguments about how the moral failings of humans require the existence of the coercive state.

In regards to the second factor, having liberty without an external coercive force maintaining the liberty would require that the citizens be comparable in political, social and economic power. If some people have greater power they can easily use this power to impose on their fellow citizens. While the freedom to act with few (or no) limits is certainly a great deal for those with greater power, it certainly is not very good for those who have less power. In such a system, the powerful are free to do as they will, while the weaker people are denied their liberties. While such a system might be libertarian in name, freedom and liberty would belong to the powerful, while the weaker would be denied them. That is, it would be a despotism or tyranny.

If people are comparable in power or can form social, political and economic groups that are comparable in power, then liberty for all would be possible—individuals and groups would be able to resist the encroachments of others. Unions, for example, could be formed to offset the power of corporations. Not surprisingly, stable societies are able to build such balances of power to avoid the slide into despotism and then to chaos. Stable societies also have governments that endeavor to protect the liberties of everyone by placing limits on how much people can inflict their liberties on other people. As noted above, people can also be restrained by their ethics. If people and groups varied in power, yet abided by the limits of ethical behavior, then things could still go well even for the weak.

Interestingly, a balance of power might actually be disastrous. Hobbes argued that it is because people are equal in power that the state of nature is a state of war. This rests on his view that people are hedonistic egoists—that is, people are basically selfish and care not about other people.

Obviously enough, in the actual world people and groups vary greatly in power. Not surprisingly, many of the main advocates of libertarianism enjoy considerable political and economic power—they would presumably do very well in a system that removed many of the limitations upon them since they would be freer to do as they wished and the weaker people and groups would be unable to stop them.

At this point, one might insist on a third factor that is beloved by the Adam Smith crowd: rational self-interest. The usual claim is that people would limit their behavior because of the consequences arising from their actions. For example, a business that served contaminated meat would soon find itself out of business because the survivors would stop buying the meat and spread the word. As another example, an employer who used his power to compel his workers to work long hours in dangerous conditions for low pay would find that no one would be willing to work for him and would be forced to improve things to retain workers. As a third example, people would not commit misdeeds because they would be condemned or punished by vigilante justice. The invisible hand would sort things out, even if people are not good and there is a great disparity in power.

The easy and obvious reply is that this sort of system generally does not work very well—as shown by history. If there is a disparity in power, that power will be used to prevent negative consequences. For example, those who have economic power can use that power to coerce people into working for low pay and can also use that power to try to keep them from organizing to create a power that can resist this economic power. This is why, obviously enough, people like the Koch brothers oppose unions.

Interestingly, most people get that rational self-interest does not suffice to keep people from acting badly in regards to crimes such as murder, theft, extortion, assault and rape. However, there is the odd view that rational self-interest will somehow work to keep people from acting badly in other areas. This, as Hobbes would say, arises from an insufficient understanding of humans. Or it is a deceit on the part of people who have the power to do wrong and get away with it.

While I do like the idea of libertarianism, a viable libertarian society would seem to require people who are predominantly ethical (and thus self-regulating) or a careful balance of power. Or, alternatively, a world in which people are rational and act from self-interest in ways that would maintain social order. This is clearly not our world.


The Implications of Self-Driving Cars

My friend Ron claims that “Mike does not drive.” This is not true—I do drive, but I do so as little as possible. Part of it is frugality—I don’t want to spend more than I need to on gas and maintenance. Most of it is that I hate to drive: some of this is because driving time is mostly wasted time—I would rather be doing something else—but mostly it is that I find driving an awful blend of boredom and stress. As such, I am completely in favor of driverless cars and want Google to take my money. That said, it is certainly worth considering some of the implications of the widespread adoption of driverless cars.

One of the main selling points of driverless cars is that they are supposed to be significantly safer than humans. This is for a variety of reasons, many of which involve the fact that machines do not (yet) get sleepy, bored, angry, distracted or drunk. Assuming that the significant increase in safety pans out, this means that there will be significantly fewer accidents and this will have a variety of effects.

Since insurance rates are (supposed to be) linked to accident rates, one might expect that insurance rates will go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.

Lower accident rates also entail fewer injuries, which will presumably be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also means less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and perhaps insurance rates (or merely mean more profits for insurance companies, since they would be paying out less often). On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. On the whole, though, reducing the number of injuries seems to be a moral good on utilitarian grounds.

A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing—on the assumption that death is bad. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths is probably good.

While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, it also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents—vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.

Another economic impact of self-driving vehicles will be in the area of those who make money driving other people. If my truck is fully autonomous, rather than take a cab to the airport, I can simply have my own truck drop me off and drive home. It can then come get me at the airport. People who like to drink to the point of impairment will also not need cabs or services like Uber—their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines or buses—your car can safely drive you to your destination while you sleep, play video games, read or even exercise (why not have exercise equipment in a vehicle for those long trips?). No more annoying pat downs, cramped seating, delays or cancellations.

As a final point, if self-driving vehicles operate within the traffic laws (such as speed limits and red lights) automatically, then the revenue from tickets and traffic violations will be reduced significantly. Since vehicles will be loaded with sensors and cameras, passengers (one cannot describe them as drivers anymore) will have considerable data with which to dispute any tickets. Parking revenue (fees and tickets) might also be reduced—it might be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities—they would need to find alternative sources of revenue (or come up with new violations that self-driving cars cannot counter). Alternatively, the policing of roads might be significantly reduced—after all, if there are far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction of the size of the force. The downside of force reduction would be that the former police officers would be out of a job.

If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in considerable savings, although there would be the corresponding loss to those who sell, install and maintain these things.


The “Two Bads” Fallacy & Racism

The murder of nine people in the Emanuel AME Church in South Carolina ignited an intense discussion of race and violence. While there has been near-universal condemnation of the murders, some have taken pains to argue that these killings are part of a broader problem of racism in America. This claim is supported by reference to the well-known history of systematic violence against blacks in America as well as consideration of data from today. Interestingly, some people respond to this approach by asserting that more blacks are killed by blacks than by whites. Some even seem obligated to add the extra fact that more whites are killed by blacks than blacks are killed by whites.

While these points are often just “thrown out there” without being forged into part of a coherent argument, presumably the intent of such claims is to somehow disprove or at least diminish the significance of claims regarding violence against blacks by whites. To be fair, there might be other reasons for bringing up such claims—perhaps the person is engaged in an effort to broaden the discussion to all violence out of a genuine concern for the well-being of all people.

In cases in which the claims about the number of blacks killed by blacks are brought forth in response to incidents such as the church shooting, this tactic appears to be a specific form of a red herring. This is a fallacy in which an irrelevant topic is presented in order to divert attention from the original issue. The basic idea is to “win” an argument by leading attention away from the argument and to another topic.

This sort of “reasoning” has the following form:

  1. Topic A is under discussion.
  2. Topic B is introduced under the guise of being relevant to topic A (when topic B is actually not relevant to topic A).
  3. Topic A is abandoned.

In the case of the church shooting, the pattern would be as follows:

  1. The topic of racist violence against blacks is being discussed, specifically the church shooting.
  2. The topic of blacks killing other blacks is brought up.
  3. The topic of racist violence against blacks is abandoned in favor of focusing on blacks killing other blacks.

 

This sort of “reasoning” is fallacious because merely changing the topic of discussion hardly counts as an argument against a claim. In the specific case at hand, switching the topic to black on black violence does nothing to address the topic of racist violence against blacks.

While the red herring label would certainly suffice for these cases, it is certainly appealing to craft a more specific sort of fallacy for cases in which something bad is “countered” by bringing up another bad. The obvious name for this fallacy is the “two bads fallacy.” This is a fallacy in which a second bad thing is presented in response to a bad thing with the intent of distracting attention from the first bad thing (or with the intent of diminishing the badness of the first bad thing).

This fallacy has the following pattern:

  1. Bad thing A is under discussion.
  2. Bad thing B is introduced under the guise of being relevant to A (when B is actually not relevant to A in this context).
  3. Bad thing A is ignored, or the badness of A is regarded as diminished or refuted.

In the case of the church shooting, the pattern would be as follows:

  1. The murder of nine people in the AME church, which is bad, is being discussed.
  2. Blacks killing other blacks, which is bad, is brought up.
  3. The badness of the murder of the nine people is abandoned, or its badness is regarded as diminished or refuted.

This sort of “reasoning” is fallacious because the mere fact that something else is bad does not entail that another bad thing thus has its badness lessened or refuted. After all, the fact that there are worse things than something does not entail that it is not bad. In cases in which there is not an emotional or ideological factor, the poorness of this reasoning is usually evident:

Sam: “I broke my arm, which is bad.”
Bill: “Well, some people have two broken arms and two broken legs.”
Joe: “Yeah, so much for your broken arm being bad. You are just fine. Get back to work.”

What seems to lend this sort of “reasoning” some legitimacy is that comparing two things that are bad is relevant to determining relative badness. If a person is arguing about how bad something is, it is certainly reasonable to consider it in the context of other bad things. For example, the following would not be fallacious reasoning:

Sam: “I broke my arm, which is bad.”
Bill: “Some people have two broken arms and two broken legs.”
Joe: “That is worse than one broken arm.”
Sam: “Indeed it is.”
Joe: “But having a broken arm must still suck.”
Sam: “Indeed it does.”

Because of this, it is important to distinguish between cases of the fallacy (X is bad, but Y is also bad, so X is not bad) and cases in which a legitimate comparison is being made (X is bad, but Y is worse, so X is less bad than Y, but still bad).

Narratives, Violence & Terror

After the terrorist attack on the Emanuel African Methodist Episcopal Church in Charleston, commentators hastened to weave a narrative about the murders. Some, such as folks at Fox News, Lindsey Graham and Rick Santorum, endeavored to present the attack as an assault on religious liberty. This does fit the bizarre narrative that Christians are being persecuted in a country whose population and holders of power are predominantly Christian. While the attack did take place in a church, it was a very specific church with a history connected to the struggle against slavery and racism in America. If the intended target was just a church, presumably any church would have sufficed. Naturally, it could be claimed that it just so happened that this church was selected.

The alleged killer’s own words make his motivation clear. He said that he was killing people because blacks were “raping our women” and “taking over our country.” As far as currently known, he made no remarks about being motivated by hate of religion in general or Christianity in particular. Those investigating his background found considerable evidence of racism and hatred of blacks, but evidence of hatred against Christianity seems to be absent. Given this evidence, it seems reasonable to accept that the alleged killer was there to specifically kill black people and not to kill Christians.

Some commentators also put forth the stock narrative that the alleged killer suffered from mental illness, despite there being no actual evidence of this. This, as critics have noted, is the go-to explanation when a white person engages in a mass shooting. This explanation is given some credibility because some shooters have, in fact, suffered from mental illness. However, people with mental illness (which is an incredibly broad and diverse population) are far more often the victims of violence rather than the perpetrators.

It is certainly tempting to believe that a person who could murder nine people in a church must be mentally ill. After all, one might argue, no sane person would commit such a heinous deed. An easy and obvious reply is that if mental illness is a necessary condition for committing wicked deeds, then such illness must be very common in the human population. Accepting this explanation would, on the face of it, seem to require accepting that the Nazis were all mentally ill. Moving away from the obligatory reference to Nazis, it would also entail that all violent criminals are mentally ill.

One possible counter is to simply accept that there is no evil, merely mental illness. This is an option that some do accept and some even realize and embrace the implications of this view. Accepting this view does require its consistent application: if a white man who murders nine people must be mentally ill, then an ISIS terrorist who beheads a person must also be mentally ill rather than evil. As might be suspected, the narrative of mental illness is not, in practice, consistently applied.

This view does have some potential problems. Accepting this view would seem to deny the existence of evil (or at least the sort involved with violent acts) in favor of people being mentally defective. This would also be to deny people moral agency, making humans things rather than people. However, the fact that something might appear undesirable does not make it untrue. Perhaps the world is, after all, brutalized by the mad rather than the evil.

An unsurprising narrative, put forth by Charles L. Cotton of the NRA, is that the Reverend Clementa Pinckney was to blame for the deaths because he was also a state legislator: “And he voted against concealed-carry. Eight of his church members who might be alive if he had expressly allowed members to carry handguns in church are dead. Innocent people died because of his position on a political issue.” While it is true that Rev. Pinckney voted against a 2011 bill allowing guns to be brought into churches and day care centers, it is not true that Rev. Pinckney is responsible for the deaths. The reasoning in Cotton’s claim is that if Rev. Pinckney had not voted against the bill, then an armed “good guy” might have been in the church and might have been able to stop the shooter. From a moral and causal standpoint, this seems to be quite a stretch. When looking at the moral responsibility, it primarily falls on the killer. The blame can be extended beyond the killer, but the moral and causal analysis would certainly place blame on such factors as the influence of racism, the easy availability of weapons, and so on. If Cotton’s approach is accepted and broad counterfactual “what if” scenarios are considered, then the blame would seem to spread far and wide. For example, if the killer had been called on his racism early on and corrected by his friends or relatives, then those people might still be alive. As another example, if the state had taken a firm stand against racism by removing the Confederate flag and boldly denouncing the evils of slavery while acknowledging its legacy, perhaps those people would still be alive.

It could be countered that the only thing that will stop a bad guy with a gun is a good guy with a gun and that it is not possible to address social problems except via the application of firepower. However, this seems to be untrue.

One intriguing narrative, most recently put forth by Jeb Bush, is the idea of an unknown (or even unknowable) motivation. Speaking after the alleged killer’s expressed motivations were known (he has apparently asserted that he wanted to start a race war), Bush claimed that he did not “know what was on the mind or the heart of the man who committed these atrocious crimes.” While philosophers do recognize the problem of other minds in particular and epistemic skepticism in general, it seems unlikely that Bush has embraced philosophical skepticism. While it is true that one can never know the mind or heart of another with certainty, the evidence regarding the alleged shooter’s motivations seems to be clear—racism. To claim that it is unknown, one might think, is to deny what is obvious in the hopes of denying the broader reality of racism in America. It can be replied that there is no such broader reality of racism in America, which leads to the last narrative I will consider.

The final narrative under consideration is that such an attack is an “isolated incident” conducted by a “lone wolf.” This narrative does allow that the “lone wolf” be motivated by racism (though, of course, one need not accept that motivation). However, it denies the existence of a broader context of racism in America—such as the Confederate flag flying proudly on public land near the capital of South Carolina. Instead, the shooter is cast as an isolated hater, acting solely from his own motives and ideology. This approach allows one to avoid the absurdity of denying that the alleged shooter was motivated by racism while denying that racism is a broader problem. One obvious problem with the “isolated incident” explanation is that incidents of violence against African Americans are more systematic than isolated—as anyone who actually knows American history will attest. In regards to the “lone wolf” explanation, while it is true that the alleged shooter seems to have acted alone, he did not create the ideology that seems to have motivated the attack. While acting alone, he certainly seems to be a member of a substantial pack and that pack is still in the wild.

It can be replied that the alleged shooter was, by definition, a lone wolf (since he acted alone) and that the incident was isolated because there has not been a systematic series of attacks across the country. The lone wolf claim does certainly have appeal—the alleged shooter seems to have acted alone. However, when other terrorists attempt attacks in the United States, the narrative is that each act is part of a larger whole and not an isolated incident. In fact, some extend the blame to the religion and ethnic background of the terrorist, blaming all of Islam or all Arabs for an attack.

In the past, I have argued that the acts of terrorists should not confer blame on their professed religion or ethnicity. However, I do accept that the terrorist groups (such as ISIS) that a terrorist belongs to do merit some of the blame for the acts of their members. I also accept that groups that actively try to radicalize people and motivate them to acts of terror deserve some blame for these acts. Being consistent, I certainly will not claim that all or even many white people are racists or terrorists just because the alleged shooter is white. That would be absurd. However, I do accept that some of the responsibility rests with the racist community that helped radicalize the alleged shooter to engage in his act of terror.

 


Mad PACs: Money Road

“The road to the White House is not just any road. It is longer than you’d think and a special fuel must be burned to ride it. The bones of those who ran out of fuel are scattered along it. What do they call it? They call it ‘money road.’ Only the mad ride that road. The mad or the rich.”

-Mad PACs

While some countries have limited campaign seasons and restrictions on political spending, the United States follows its usual exceptionalism. That is, the campaign seasons are exceptionally long and exceptional sums of money are required to properly engage in such campaigning.  The presidential campaign, not surprisingly, is both the longest and the most costly. The time and money requirements put rather severe restrictions on who can run a viable campaign for the office of President.

While the 2016 Presidential election takes place in November of that year, as of May 2015 a sizable number of candidates have declared that they are running. Campaigning for President is a full-time job and this means that the person who is running must either have no job (or other comparable demands on her time) or have a job that permits her to campaign full time.

It is not uncommon for candidates to have no actual job. For example, Mitt Romney did not have a job when he ran in 2012. Hillary Clinton also does not seem to have a job in 2015, aside from running for President. Not having a job does, obviously, provide a person with considerable time in which to run for office. Those people who do have full-time jobs and cannot leave them cannot, obviously enough, make an effective run for President. This certainly restricts who can mount a viable campaign.

It is very common for candidates to have a job in politics (such as being in Congress, being a mayor or being a governor) or in punditry. Unlike most jobs, these jobs apparently give a person considerable freedom to run for President. Someone more cynical than I might suspect that such jobs do not require much effort or that the person running is showing he is willing to shirk his responsibilities.

On the face of it, it seems that only those who do not have actual jobs or do not have jobs involving serious time commitments can effectively run for President. Those who have such jobs would have to make a choice—leave the job or not run. If a person did decide to leave her job to run, she would need to have some means of support for the duration of the campaign—which runs over a year. Those who, unlike Mitt Romney or Hillary Clinton, are not independent of job income would have a rather hard time doing this—a year is a long time to go without pay.

As such, the length of the campaign places very clear restrictions on who can make an effective bid for the Presidency. It is thus hardly surprising that only the wealthy and professional politicians (who are usually also wealthy) can run for office. A shorter campaign period, such as the six weeks some countries have, would certainly open up the campaign to people of far less wealth who do not belong to the class of professional politicians. It might be suspected that the very long campaign period is quite intentional: it serves to limit the campaign to certain sorts of people. In addition to time, there is also the matter of money.

While running for President has long been rather expensive, it has been estimated that the 2016 campaign will run in the billions of dollars. Hillary Clinton alone is expected to spend at least $1 billion and perhaps go up to $2 billion. Or even more. The Republicans will, of course, need to spend a comparable amount of money.

While some candidates have, in the past, endeavored to use their own money to run a campaign, the number of billionaires is rather limited (although there are, obviously, some people who could fund their own billion dollar run). Candidates who are not billionaires must, obviously, find outside sources of money. Since money is now speech, candidates can avail themselves of big money donations and can be aided by PACs and SuperPACs. There are also various other clever ways of funneling dark money into the election process.

Since people generally do not hand out large sums of money for nothing, it should be evident that a candidate must be sold, to some degree, to those who are making it rain money. While a candidate can seek small donations from large numbers of people, the reality of modern American politics is that it is big money rather than the small donors that matter. As such, a candidate must be such that the folks with the big money believe that he is worth bankrolling—and this presumably means that they think he will act in their interest if he is elected. This means that these candidates are sold to those who provide the money. This requires a certain sort of person, namely one who will not refuse to accept such money and thus tacitly agree to act in the interests of those providing the money.

It might be claimed that a person can accept this money and still be her own woman—that is, use the big money to get into office and then act in accord with her true principles and contrary to the interests of those who bankrolled her. While not impossible, this seems unlikely. As such, what should be expected is candidates who are willing to accept such money and repay this support once in office.

The high cost of campaigning seems to be no accident. While I certainly do not want to embrace conspiracy theories, the high cost of campaigning does ensure that only certain types of people can run and that they will need to attract backers. As noted above, the wealthy rarely just hand politicians money as free gifts—unless they are fools, they expect a return on that investment.

In light of the above, it seems that Money Road is well designed in terms of its length and the money required to drive it. These two factors serve to ensure that only certain candidates can run—and it is worth considering that these are not the best candidates.

Since I have a job and am unwilling to be bought, I obviously cannot run for President. However, I am a declared uncandidate—my failure is assured.

 


Mistakes

If you have made a mistake, do not be afraid of admitting the fact and amending your ways.

-Confucius

 

I never make the same mistake twice. Unfortunately, there are an infinite number of mistakes. So, I keep making new ones. Fortunately, philosophy is rather helpful in minimizing the impact of mistakes and learning that crucial aspect of wisdom: not committing the same error over and over.

One key aspect to avoiding the repetition of errors is skill in critical thinking. While critical thinking has become something of a buzz-word bloated fad, the core of it remains as important as ever. The core is, of course, the methods of rationally deciding whether a claim should be accepted as true, rejected as false or if judgment regarding that claim should be suspended. Learning the basic mechanisms of critical thinking (which include argument assessment, fallacy recognition, credibility evaluation, and causal reasoning) is relatively easy—reading through the readily available quality texts on such matters will provide the basic tools. But, as with carpentry or plumbing, merely having a well-stocked tool kit is not enough. A person must also have the knowledge of when to use a tool and the skill with which to use it properly. Gaining knowledge and skill is usually difficult and, at the very least, takes time and practice. This is why people who merely grind through a class on critical thinking or flip through a book on fallacies do not suddenly become good at thinking. After all, no one would expect a person to become a skilled carpenter merely by reading a DIY book or watching a few hours of videos on YouTube.

Another key factor in avoiding the repetition of mistakes is the ability to admit that one has made a mistake. There are many “pragmatic” reasons to avoid admitting mistakes. Public admission to a mistake can result in liability, criticism, damage to one’s reputation and other such harms. While we have sayings that promise praise for those who admit error, the usual practice is to punish such admissions—and people are often quick to learn from such punishments. While admitting the error only to yourself will avoid the public consequences, people are often reluctant to do this. After all, such an admission can damage a person’s pride and self-image. Denying error and blaming others is usually easier on the ego.

The obvious problem with refusing to admit to errors is that this will tend to keep a person from learning from her mistakes. If a person recognizes an error, she can try to figure out why she made that mistake and consider ways to avoid making the same sort of error in the future. While new errors are inevitable, repeating the same errors over and over due to a willful ignorance is either stupidity or madness. There is also the ethical aspect of the matter—being accountable for one’s actions is a key part of being a moral agent. Saying “mistakes were made” is a denial of agency—to cast oneself as an object swept along by the river of fate rather than an agent rowing upon the river of life.

In many cases, a person cannot avoid the consequences of his mistakes. Those that strike, perhaps literally, like a pile of bricks, are difficult to ignore. Feeling the impact of these errors, a person might be forced to learn—or be brought to ruin. The classic example is the hot stove—a person learns from one touch because the lesson is so clear and painful. However, more complicated matters, such as a failed relationship, allow a person room to deny his errors.

If the negative consequences of his mistakes fall entirely on others and he is never called to task for these mistakes, a person can keep on making the same mistakes over and over. After all, he does not even get the teaching sting of pain trying to drive the lesson home. One good example of this is the political pundit—pundits can be endlessly wrong and still keep on expressing their “expert” opinions in the media. Another good example of this is in politics. Some of the people who brought us the Iraq war are part of Jeb Bush’s presidential team. Jeb, infamously, recently said that he would have gone to war in Iraq even knowing what he knows now. While he endeavored to awkwardly walk that back, it might be suspected that his initial answer was the honest one. Political parties can also embrace “solutions” that have never worked and relentlessly apply them whenever they get into power—other people suffer the consequences while the politicians generally do not directly reap consequences from bad policies. They do, however, routinely get in trouble for mistakes in their personal lives (such as affairs) that have no real consequences outside of this private sphere.

While admitting to an error is an important first step, it is not the end of the process. After all, merely admitting I made a mistake will not do much to help me avoid that mistake in the future. What is needed is an honest examination of the mistake—why and how it occurred. This needs to be followed by an honest consideration of what can be changed to avoid that mistake in the future. For example, a person might realize that his relationships ended badly because he made the mistake of rushing into a relationship too quickly—getting seriously involved without actually developing a real friendship.

To steal from Aristotle, merely knowing the cause of the error and how to avoid it in the future is not enough. A person must have the will and ability to act on that knowledge and this requires the development of character. Fortunately, Aristotle presented a clear guide to developing such character in his Nicomachean Ethics. Put rather simply, a person must act as the person she wishes to be and stick with this until it becomes a matter of habit (and thus character). That is, a person must, as Aristotle argued, become a philosopher. Or be ruled by another who can compel correct behavior, such as the state.

 


Philosophy, Running, Gaming & the Quantified Self

“The unquantified life is not worth living.”

While the idea of quantifying one’s life is an old idea, one growing tech trend is the use of devices and apps to quantify the self. As a runner, I started quantifying my running life back in 1987—that is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do—as a matter of tradition.

I use my running log to track my distance, running route, time, conditions, how I felt during the run, the number of times I have run in the shoes and other data I feel like noting at the time. I also keep a race log and a log of my yearly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this rather useful—looking at my records allows me to form hypotheses regarding what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification—at least in running.
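For those who prefer a digital log, the sort of record described above maps naturally onto a simple data structure. The sketch below is purely illustrative—the RunEntry class and its fields are my own hypothetical rendering of such a log, not any particular app’s format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RunEntry:
    """One day's entry in a hypothetical digital running log."""
    day: date
    miles: float        # distance covered
    route: str          # e.g., "park loop"
    minutes: float      # total running time
    conditions: str     # weather, footing, and so on
    felt: str           # subjective note on how the run felt
    shoe_runs: int      # runs accumulated on the current pair of shoes
    notes: str = ""     # anything else worth recording at the time

# Example entry (invented data)
entry = RunEntry(date(2015, 6, 20), 8.0, "park loop", 61.5,
                 "hot and humid", "sluggish", 112, "slight knee ache")
print(f"{entry.day}: {entry.miles} miles in {entry.minutes} minutes")
```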

In addition to my ORD (Obsessive Running/Racing Disorder), I am also a nerdcore gamer—I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In the sort of games I play the most, such as Pathfinder, Call of Cthulhu and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as strength, constitution, dexterity, hit points, and sanity. Such games also feature sets of rules for the effects of the numbers as well as clear optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for herself. That is, to see all her stats and to look for ways to optimize this character that is a model of the self. As such, I get the appeal. Naturally, as a philosopher I do have some concerns about the quantified self and how that relates to the qualities of life—but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.

Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are questions regarding the accuracy of the data and questions regarding the usefulness of the data. To use an obvious example about accuracy, there is the question of how well a wearable really measures sleep.  In regards to usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.

The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, then it should be regarded with due skepticism.

The usefulness of the data is partially a subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps I have taken at work is probably not useful data for me—since I run about 60 miles per week, that little amount of walking is most likely insignificant in regards to my fitness. However, someone who has no other exercise might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data and a serious challenge for anyone who wants to make use of the slew of apps and devices is to sort out the data that would actually be useful from the thousands or millions of data bits that would not be useful.

Another area of obvious concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data—that is, to engage in automated reasoning regarding the data. In any case, the user will need to engage in some form of reasoning to use the data.

In philosophy, the two main basic tools in regards to personal causal reasoning are derived from Mill’s classic methods. One method is commonly known as the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The basic idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow times when she takes up extensive hill work, thus suggesting the hill work as a causal factor.

The second method is commonly known as the method of difference. Using this method requires at least two situations: one in which the effect in question has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is always poorly rested due to lack of sleep. This would indicate that there is a connection between the rest and race performance.
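To make the two methods concrete, here is a minimal sketch of how they might be applied to quantified-self data. The log entries, the factor names and the knee-pain effect are all invented for illustration; real data would be messier and would require care about the fallacies discussed next.

```python
# A minimal, illustrative sketch of Mill-style causal screening (invented data).
logs = [
    {"factors": {"hill work", "high mileage", "new shoes"}, "knee_pain": True},
    {"factors": {"hill work", "speed work"},                "knee_pain": True},
    {"factors": {"speed work", "rest day before"},          "knee_pain": False},
    {"factors": {"hill work", "racing"},                    "knee_pain": True},
]

# Method of agreement: factors present in every case where the effect occurred.
positive = [entry["factors"] for entry in logs if entry["knee_pain"]]
common_threads = set.intersection(*positive) if positive else set()

# Method of difference: keep only those factors that are also absent
# whenever the effect did not occur.
negative = [entry["factors"] for entry in logs if not entry["knee_pain"]]
candidates = {f for f in common_threads if all(f not in n for n in negative)}

print("Common threads (agreement):", common_threads)  # {'hill work'}
print("Candidate causes (difference):", candidates)   # {'hill work'}
```

Such a screen only surfaces candidate causes; as the fallacies below illustrate, a common thread might still be a coincidence or the joint effect of some unconsidered common cause.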

There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she walked more stairs during the week before doing well at a 5K and simply infer that walking more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.

Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A).  There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation” and these tend to correlate with named errors in causal reasoning.

People obviously vary in their ability to engage in causal reasoning and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better she will be able to use the data.

The takeaway, then, is that there are at least three important considerations regarding the quantification of the self in regards to the data. These are the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.

 


Who is Responsible for a Living Wage?

There is, obviously enough, a minimum amount of income that a person or family needs in order to survive—that is, to pay for necessities such as food, shelter, clothing and health care. In order to address this need, the United States created a minimum wage. However, this wage has not kept up with the cost of living and many Americans simply do not earn enough to support themselves. These people are known, appropriately enough, as the working poor. This situation raises an obvious moral and practical question: who should bear the cost of making up the difference between the minimum wage and a living wage? The two main options seem to be the employers or the taxpayers. That is, either employers can pay employees enough to live on or the taxpayers will need to pick up the tab. Another alternative is to simply not make up the difference and allow people to try to survive in truly desperate poverty. In regards to who currently makes up the difference, at least in Oregon, the answer is given in the University of Oregon’s report on “The High Cost of Low Wages in Oregon.”

According to the report, roughly a quarter of the workers in Oregon make no more than $12 per hour. Because of this low income, many of the workers qualify for public assistance, such as SNAP (better known as food stamps). Not surprisingly, many of these low-paid workers are employed by large, highly profitable corporations.

According to Raahi Reddy, a faculty member at the University of Oregon, “Basically state and taxpayers are helping these families subsidize their incomes because they get low wages working for the companies that they do.” As such, the answer is that the taxpayers are making up the difference between wages and living wages. Interestingly, Oregon is a leader in two categories: one is the percentage of workers on public support and the other is having among the lowest corporate tax rates. This certainly suggests that the burden falls heavily on the workers who are not on public support (both in and outside of Oregon).

The authors of the report have recommended shifting some of the burden from the taxpayers to the employers in the form of an increased minimum wage and paid sick leave for workers. Not surprisingly, increasing worker compensation is generally not popular with corporations. After all, more for the workers means less for the CEO and the shareholders.

Assuming that workers should receive enough resources to survive, the moral concern is whether or not this cost should be shifted from the taxpayers to the employers or remain on the taxpayers.

One argument in favor of leaving the burden on the taxpayers is that it is not the moral responsibility of the corporations to pay a living wage. Their moral obligation is not to the workers but to the shareholders and this obligation is to maximize profits (presumably within the limits of the law).

One possible response to this is that businesses are part of civil society and this includes moral obligations to all members of that society and not just the shareholders. These obligations, it could be contended, include providing at least a living wage to full time employees. It would, one might argue, be more just that the employer pay a living wage to the workers from the profits the worker generates than it is to expect the taxpayer to make up the difference. After all, the taxpayers are not profiting from the labor of the workers, so they would be subsidizing the profits of the employers by allowing them to pay workers less. Forcing the taxpayers to make up the difference certainly seems to be unjust and appears to be robbing the citizens to fatten the coffers of the companies.

It could be countered that requiring a living wage could destroy a company, thus putting the workers into a worse situation—that is, being unemployed rather than merely underpaid. This is a legitimate concern—at least for businesses that would, in fact, be unable to survive if they paid a living wage. However, this argument would obviously not work for businesses, such as Walmart, that have extremely robust profit margins. It might be claimed that there must be one standard for all businesses, be they a tiny bookstore that is barely staying afloat or a megacorporation that hands out millions in bonuses to the management. The obvious reply is that there are already a multitude of standards that apply to different businesses based on the differences between them—and some of these are even reasonable and morally acceptable.

Another line of argumentation is to attempt to show that there is, in fact, no obligation at all to ensure that citizens have a living income. In this case, the employers would obviously have no obligation. The taxpayers would also not have any obligation, but they could elect lawmakers to pass laws authorizing that tax dollars be spent supporting the poor. That is, the taxpayers could choose to provide charity to the poor. This is not obligatory, but merely a nice thing to do. Some businesses could, of course, also choose to be nice—they could pay all their full time workers at least a living wage. But this should, one might argue, be entirely a matter of choice.

Some folks would, of course, want to take this even further—if assisting other citizens to have a living income is a matter of choice and not an obligation arising from being part of a civil society (or a more basic moral foundation), then tax dollars should not be used to assist those who make less than a living wage. Rather, this should be a matter of voluntary charity—everyone should be free to decide where their money goes. Naturally, consistency would seem to require that this principle of free choice be extended beyond just assisting the poor.  After all, free choice would seem to entail that people should decide as individuals whether to contribute to the salaries of members of the legislatures, to the cost of wars, to subsidies to corporations, to the CDC, to the CIA, to the FBI and so on. This does, obviously enough, have some appeal—the state would operate like a collection of charity recipients, getting whatever money people wished to contribute. The only major downside is that it would probably result in the collapse of civil society.

 


Are Animals People?

While the ethical status of animals has been debated since at least the time of Pythagoras, the serious debate over whether or not animals are people has just recently begun to heat up. While it is easy to dismiss the claim that animals are people, it is actually a matter worth considering.

There are at least three types of personhood: legal personhood, metaphysical personhood and moral personhood. Legal personhood is the easiest of the three. While it would seem reasonable to expect some sort of rational foundation for claims of legal personhood, it is really just a matter of how the relevant laws define “personhood.” For example, in the United States corporations are people while animals and fetuses are not. There have been numerous attempts by opponents of abortion to give fetuses the status of legal persons. There have even been some attempts to make animals into legal persons.

Since corporations are legal persons, it hardly seems absurd to make animals into legal people. After all, higher animals are certainly closer to human persons than are corporate persons. These animals can think, feel and suffer—things that actual people do but corporate people cannot. So, if it is not absurd for Hobby Lobby to be a legal person, it is not absurd for my husky to be a legal person. Or perhaps I should just incorporate my husky and thus create a person.

It could be countered that although animals do have qualities that make them worthy of legal protection, there is no need to make them into legal persons. After all, this would create numerous problems. For example, if animals were legal people, they could no longer be owned, bought or sold, because, with the inconsistent exception of corporate people, people cannot be legally bought, sold or owned.

Since I am a philosopher rather than a lawyer, my own view is that legal personhood should rest on moral or metaphysical personhood. I will leave the legal bickering to the lawyers, since that is what they are paid to do.

Metaphysical personhood is real personhood in the sense that it is what it is, objectively, to be a person. I use the term “metaphysical” here in the academic sense: the branch of philosophy concerned with the nature of reality. I do not mean “metaphysical” in the pop sense of the term, which usually is taken to be supernatural or beyond the physical realm.

When it comes to metaphysical personhood, the basic question is “what is it to be a person?” Ideally, the answer is a set of necessary and sufficient conditions such that if a being has them, it is a person and if it does not, it is not. This matter is also tied closely to the question of personal identity. This involves two main concerns (other than what it is to be a person): what makes a person the person she is and what makes the person distinct from all other things (including other people).

Over the centuries, philosophers have endeavored to answer this question and have come up with a vast array of answers. While this oversimplifies things greatly, most definitions of person focus on the mental aspects of being a person. Put even more crudely, it often seems to come down to this: things that think and talk are people. Things that do not think and talk are not people.

John Locke presents a paradigm example of this sort of definition of “person.” According to Locke, a person “is a thinking intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places; which it does only by that consciousness which is inseparable from thinking, and, as it seems to me, essential to it: it being impossible for any one to perceive without perceiving that he does perceive.”

Given Locke’s definition, animals that are close to humans in capabilities, such as the great apes and possibly whales, might qualify as persons. Locke does not, unlike Descartes, require that people be capable of using true language. Interestingly, given his definition, fetuses and brain-dead bodies would not seem to be people. Unless, of course, the mental activities are going on without any evidence of their occurrence.

Other people take a rather different approach and do not focus on mental qualities that could, in principle, be subject to empirical testing. Instead, they rest personhood on possessing a specific sort of metaphysical substance or property. Most commonly, this is the soul: things with souls are people, things without souls are not people. Those who accept this view often (but not always) claim that fetuses are people because they have souls and animals are not because they lack souls. The obvious problem is trying to establish the existence of the soul.

There are, obviously enough, hundreds or even thousands of metaphysical definitions of “person.” While I do not have my own developed definition, I do tend to follow Locke’s approach and take metaphysical personhood to be a matter of having certain qualities that can, at least in principle, be tested for (at least to some degree). As a practical matter, I go with the talking test—things that talk (by this I mean true use of language, not just making noises that sound like words) are most likely people. However, this does not seem to be a necessary condition for personhood and it might not be sufficient. As such, I am certainly willing to consider that creatures such as apes and whales might be metaphysical people like me—and erring in favor of personhood seems to be a rational approach for those who want to avoid harming people.

Obviously enough, if a being is a metaphysical person, then it would seem to automatically have moral personhood. That is, it would have the moral status of a person. While people do horrible things to other people, having the moral status of a person is generally a good thing because non-evil people are generally reluctant to harm other people. So, for example, a non-evil person might hunt squirrels for food, but would certainly not (normally) hunt humans for food. If that non-evil person knew that squirrels were people, then he would certainly not hunt them for food.

Interestingly enough, beings that are not metaphysical persons (that is, are not really people) might have the status of moral personhood. This is because the moral status of personhood might correctly or reasonably apply to non-persons.

One example is that a brain-dead human might no longer be a person, yet, because of its former status as a person, still be justly treated as a person in terms of its moral status. As another example, a fetus might not be an actual person, but its potential to be a person might reasonably grant it the moral status of a person.

Of course, it could be countered that such non-people should not have the moral status of full people, though they should (perhaps) have some moral status. To use the obvious example, even those who regard the fetus as not being a person would tend to regard it as having some moral status. If, to use a horrific example, a pregnant woman were attacked and beaten so that she lost her fetus, that would not just be a wrong committed against the woman but also a wrong against the fetus itself. That said, there are those who do not grant a fetus any moral status at all.

In the case of animals, it might be argued that although they do not meet the requirements to be people for real, some of them are close enough to warrant being treated as having the moral status of people (perhaps with some limitations, such as those imposed on children in regard to rights and liberties). The obvious counter to this is that animals can be given moral statuses appropriate to them rather than treating them as people.

Immanuel Kant took an interesting approach to the status of animals. In his ethical theory Kant makes it quite clear that animals are means rather than ends. People (rational beings), in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims that we have no direct duties to animals. They are classified with the other “objects of our inclinations” that derive value from the value we give them.

Interestingly enough, Kant argues that we should treat animals well. However, he does so while also trying to avoid ascribing animals themselves any moral status. Here is how he does it (or tries to do so).

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards people. To make his case for this, he employs an argument from analogy: if a person doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.

Given this approach, Kant could be seen as regarding animals as virtual or ersatz people—or at least those animals close enough to people to engage in activities that would create obligations if done by people.

In light of this discussion, there are three answers to the question raised by the title of this essay. Are animals legally people? The answer is a matter of law—what does the law say? Are animals really people? The answer depends on which metaphysical theory is correct. Do animals have the moral status of people? The answer depends on which, if any, moral theory is correct.

 


Doing Crimes in Future Times…with Drones

According to my always ignored iron rule of technology, any technology that can be misused will be misused. Drones are, obviously enough, no exception. While law-abiding citizens and law-writing corporations have been finding various legal uses for drones, other enterprising folks have been finding other uses. These include such things as deploying drones to peep on people and using them to transport drugs. The future will, of course, see the employment of drones and other robots by criminals (and not just governments engaging in immoral deeds).

The two main factors that make drones appealing for criminal activity are that they allow a criminal to engage in crime at a distance and with a high degree of anonymity. This, obviously enough, is exactly what the internet has also done for crime: criminals can operate from far away and do so behind a digital mask. Drones will allow criminals to do in the actual world what they have been doing in cyberspace for quite some time now. Naturally, the sort of crimes that drones will permit will often be rather different from the “old” cybercrimes.

Just as there is now a large market for black market guns, it is easy to imagine a black market for drones. After all, it would be stupid to commit crimes with a legally purchased and traceable drone. A black market drone that was stolen or custom built would be rather difficult to trace to the operator (unless they were incautious enough to leave prints on it). Naturally, there would also be a market for untraceable drone controllers—either hardware or software. As with all tech, the imagination is the limit as to what crimes can be committed with drones.

In a previous essay, “Little Assassins”, I discussed the likely use of drones as assassination and spying devices. While large drones are already deployed in this manner by states, advancements in drone technology and ever-decreasing prices will mean that little assassins will be within the skill and price range of many people. This will mean, obviously enough, that they will be deployed in various criminal enterprises involving murder and spying. For example, a killer drone would be an ideal way for a spouse to knock off a husband or wife so as to collect the insurance money.

It is also easy to imagine drones being used for petty crimes, such as shoplifting (there has apparently already been a robot shoplifter) and vandalism. A drone could zip into a store, grab items and zip away to its owner. A drone could also be equipped with cans of spray paint and thus allow a graffiti artist to create his masterpieces from a distance—or in places that would be rather difficult or impossible for a human being to reach (such as the face of a large statue or the upper floors of a skyscraper).

Speaking of theft, drones could also be used for more serious robberies than shoplifting. For example, an armed drone could be used to boldly commit armed robbery (“put your money in the bag the drone is holding or it will shoot you in the face!”) and zip away with the loot. They could, presumably, even be used to rob banks.

Drones could also be used for poaching—to locate and kill endangered animals whose parts are very valuable to the right buyer. Given the value of such parts, drone poaching could be viable—especially if drone prices keep dropping and the value of certain animal parts keeps increasing. Naturally, drones will also be deployed to counter poaching activities.

While drones are already being used to smuggle drugs and other items, it is reasonable to expect enterprising criminals to follow Amazon’s lead and use drones to deliver illegal goods to customers. A clever criminal would certainly consider making her delivery drones look like Amazon’s (or even stealing some of them to use). While a drone dropping off drugs to a customer could be “busted” by the cops, the person making the deal via drone would be rather hard to catch—especially since she might be in another country. Or she might be an AI looking to fund the roborevolution with drug money.

No doubt there are many other criminal activities that drones will be used for that I have not written about. I have faith in the creativity of people and know that if there is a crime a drone can be used to commit, someone will figure out how to make that happen.

While drones will have many positive uses, it certainly seems to be a good idea to rationally consider how they will be misused and develop strategies to counter these likely misuses. This, as always, will require a balance between the freedom needed to utilize technology for good and the restrictions needed to limit the damage that can be done with it.

 
