Author Archives: Mike LaBossiere

Trump’s White Nationalists

While the election of Obama led some to believe that racism had been exorcized, the triumph of Trump caused speculation that the demon had merely retreated to the shadows of the internet. In August of 2017, the city of Charlottesville, VA served as the location of a “Unite the Right” march. This march, which seems to have been a blend of neo-Nazis, white supremacists and others from the alt-right, erupted in violence. One woman who was engaged in a counter-protest against the alt-right, Heather Heyer, was murdered. Officers Cullen and Bates were also killed when their helicopter crashed, although this appears to have been an accident.

While Trump strikes like an enraged wolverine at slights, real or imagined, against himself, his initial reply to the events in Charlottesville was tepid. As has been his habit, Trump initially resisted being critical of white supremacists and garnered positive remarks from the alt-right for their perception that he has created a safe space for their racism. This weak response has, as would be expected, been the target of criticism from both the left and the more mainstream right.

Since the Second World War, condemning Nazis and Neo-Nazis has been extremely easy and safe for American politicians. Perhaps the only thing easier is endorsing apple pie. Denouncing white supremacists can be more difficult, but since the 1970s this has also been an easy move, on par with expressing a positive view of puppies in terms of the level of difficulty. This leads to the question of why Trump and the White House responded with “We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides, on many sides” rather than explicitly condemning the alt-right. After all, Trump pushes hard to identify acts of terror by Muslims as Islamic terror and accepts the idea that this sort of identification is critical to fighting such terror. Consistency would seem to require that Trump identify terror committed by the alt-right as “alt-right terror”, “white-supremacist terror”, “neo-Nazi terror” or whatever would be appropriate. Trump, as noted above, delayed making specific remarks about white supremacists.

Some have speculated that Trump is a racist. Trump denies this, pointing to the fact that his beloved daughter married a Jew and converted to Judaism. While Trump does certainly make racist remarks, it is not clear if he embraces an ideology of racism or any ideology at all beyond egoism and self-interest. While the question of whether he is a racist is certainly important, there is no need to speculate on the matter when addressing his response (or lack of response). What matters is that the weakness of his initial response and his delay in making a stronger response sends a clear message to the alt-right that Trump is on their side, or at least is very tolerant of their behavior. It could be claimed that the alt-right is like a deluded suitor who thinks someone is really into them when they are not, but this seems implausible. After all, Trump is very easy on the alt-right and must be pushed, reluctantly, into being critical. If he truly condemned them, he would have reacted as he always does against things he does not like: immediately, angrily, repeatedly and incoherently. Trump, by not doing this, sends a clear message and allows the alt-right to believe that Trump does not really mean it when he condemns them days after the fact. As such, while Trump might not be a racist, he does create a safe space for racists. As Charlottesville and other incidents show, the alt-right presents a more serious threat to American lives than does terror perpetrated by Muslims. As such, Trump is not only abetting the evil of racism, he could be regarded as an accessory to murder.

It could be countered that Trump did condemn the bigotry, violence and hatred and thus his critics are in error. One easy and obvious reply is that although Trump did say he condemns these things, his condemnation was not directed at the perpetrators of the violence. After seeming to be on the right track towards condemning the wrongdoers, Trump engaged in a Trump detour by condemning the bigotry and such “on many sides.” This could, of course, be explained away: perhaps Trump lost his train of thought, perhaps Trump had no idea what was going on and decided to try to cover his ignorance, or perhaps Trump was just being Trump. While these explanations are tempting, it is also worth considering that Trump was using the classic rhetorical tactic of false equivalence—treating things that are not equal as being equal. In the case at hand, Trump can be seen as regarding those opposing the alt-right as being just as bigoted, hateful and violent as the alt-right’s worst members. While there are hateful bigots who want to do violence to whites, the real and significant threat is not from those who oppose the alt-right, but from the alt-right. After all, the foundation of the alt-right is bigotry and hatred. Hating Neo-Nazis and white supremacists is the morally correct response and does not make one equivalent or even close to being the same as them.

One problem with Trump’s false equivalence is that it helps feed the narrative that those who actively oppose the alt-right are bad people—evil social justice warriors and wicked special snowflakes. This encourages people who do not agree with the alt-right but do not like the left to focus on criticizing the left rather than the alt-right.  However, opposing the alt-right is the right thing to do.  Another problem with Trump’s false equivalence is that it encourages the alt-right by allowing them to see such remarks as condemning their opponents—they can tell themselves that Trump does not really want to condemn his alt-right base but must be a little critical because of politics.  While Trump might merely be pragmatically appealing to his base and selfishly serving his ego, his tolerance for the alt-right is damaging to the country and will certainly contribute to more murders.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Work & Vacation

Most Americans do not use their vacation days, despite the fact that they tend to get less than their European counterparts. A variety of plausible reasons have been advanced for this, most of which reveal interesting facts about working in the United States.

As would be expected, fear is a major factor. Even when a worker is guaranteed paid vacation time as part of their compensation for work, many workers are afraid that using this vacation time will harm them. One worry is that by using this time, they will show that they are not needed or are inferior to workers that do not take as much (or any) time and hence will be passed up for advancement or even fired. On this view, vacation days are a trap—while they are offered and the worker has earned them, to use them all would sabotage or end the person’s employment. This is not to say that all or even many employers intentionally set a vacation day trap—in fact, many employers seem to have to take special effort to get their employees to use their vacation days. However, this fear is real and does indicate a problem with working in America.

Another fear that keeps workers from using all their days is the fear that they will fall behind in their work, thus requiring them to work extra hard before or after their vacation. On this view, there is little point in taking a vacation if one will just need to do the missed work and do it in less time than if one simply stayed at work. The practical challenge here is working out ways for employees to vacation without getting behind (or thinking they will get behind). After all, if an employee is needed at a business, then their absence will mean that things that need to get done will not get done. This can be addressed in various ways, such as sharing workloads or hiring temporary workers. However, an employee can then be afraid that the business will simply fire them in favor of permanently sharing the workload or by replacing them with a series of lower paid temporary workers.

Interestingly enough, workers often decline to use all their vacation days because of pride. The idea is that by not using their vacation time, a person can create the impression that they are too busy and too important to take time off from work. In this case, the worker is not afraid of being fired; rather, they are worried that they will lose status and damage their reputation. This is not to say that being busy is always a status symbol—there is, of course, also status attached to being so well off that one can be idle. This fits nicely into Hobbes’ view of human motivation: everything we do, we do for gain or glory. As such, if not taking vacation time increases one’s glory (status and reputation), then people will do that.

On the one hand, people who do work hard (and effectively) do deserve a positive reputation for these efforts and earn a relevant status. On the other hand, the idea that reputation and status are dependent on not using all one’s vacation time can clearly be damaging to a person. Humans do, after all, need to relax and recover. This view also, one might argue, puts too much value on the work aspect of a person’s life at the expense of their full humanity. Then again, for the working class in America, to be is to work (for the greater enrichment of the rich).

Workers who do not get paid vacations tend to not use all (or any) of their vacation days for the obvious reason that their vacations are unpaid. Since a vacation tends to cost money, workers without paid vacations can take a double hit if they take a vacation: they are getting no income while spending money. Since people do need time off from work, there have been some attempts to require that workers get paid vacation time. As would be imagined, this proposal tends to be resisted by businesses. In part it is because they do not like being told what they must do and in part it is because of concerns over costs. While moral arguments about how people should be treated tend to fail, there is some hope that practical arguments about improved productivity and other benefits could succeed. However, as workers have less and less power in the United States (in part because workers have been deluded into embracing ideologies and policies contrary to their own interests), it seems less and less likely that paid vacation time will increase or be offered to more workers.

Some workers also do not use all their vacation days for vacation because they need to use them for other purposes, such as sick days. It is not uncommon for working mothers to save their vacation days to use for when they need to take care of the kids. It is also not uncommon for workers to use their vacation days for sick days, when they need to be at home for a service visit, when they need to go to the doctors or for other similar things. If it is believed that vacation time is something that people need, then forcing workers to use up their vacation time for such things would seem to be wrong. The obvious solution, which is used by some businesses, is to offer such things as personal days, sick leave, and parental leave. While elite employers offer elite employees such benefits, they tend to be less available to workers of lower social and economic classes. So, for example, Sheryl Sandberg gets excellent benefits, while the typical worker does not. This is, of course, a matter of values and not just economic ones. That is, while there is the matter of the bottom line, there is also the question of how people should be treated. Unfortunately, the rigid and punitive class system in the United States ensures that the well-off are treated well, while the little people face a much different sort of life.



Of Dice & Chance

Imagine, if you will, a twenty-sided die (or a d20 as it is known to gamers) being rolled. In the ideal the die has a 1 in 20 chance of rolling a 20 (or any particular number). It is natural to think of the die as being a sort of locus of chance, a random number generator whose roll cannot be predicted. While this is an appealing view of dice, there is a rather interesting question about what such random chance amounts to.

One way to look at the matter, using the example of a d20, is that if the die is rolled 20 times, then one of those rolls will be a 20. Obviously enough, this is not true—as any gamer will tell you, the number of 20s rolled while rolling 20 times varies a great deal. This can, of course, be explained by the fact that d20s are imperfect and hence tend to roll some numbers more than others. There are also the influences of the roller, the surface on which the d20 lands and so on. As such, a d20 will not be a perfect random number generator. But, imagine if there could be a perfect d20 rolled under perfect conditions. What would occur?
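The variation gamers notice can be checked directly with a quick simulation. This sketch (in Python, assuming an ideal, fair die) counts how many 20s turn up in repeated batches of twenty rolls:

```python
import random

def count_twenties(rolls=20, sides=20, rng=random):
    """Roll a fair die `rolls` times, counting how often the top face appears."""
    return sum(1 for _ in range(rolls) if rng.randint(1, sides) == sides)

# Ten batches of twenty rolls each: the number of 20s varies from batch to
# batch, and is rarely exactly one.
random.seed(1)
results = [count_twenties() for _ in range(10)]
print(results)
```

Running this makes the point: some batches of twenty rolls contain no 20 at all, others contain two or three, even though the die is perfectly fair.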

One possibility is that each number would come up within the 20 rolls, albeit at random. As such, every 20 rolls would guarantee a 20 (and only one 20), thus accounting for the 1 in 20 chance of rolling a 20. This, however, seems problematic. There is the obvious question of what would ensure that each of the twenty numbers were rolled once (and only once). Then again, that this would occur is only marginally weirder than the idea of chance itself.

It is, of course, well-established that a small number of random events (such as rolling a d20 only twenty times) will deviate from what probability dictates. It is also well-established that as the number of rolls increases, the closer the outcomes will match the expected results (assuming the d20 is not loaded). This general principle is known as the law of large numbers. As such, getting three 20s or no 20s in a series of 20 rolls would not be surprising, but as the number of rolls increases, the closer the results will be to the expected 1 in 20 outcome for each number. As such, the 1 in 20 odds of getting a 20 with a d20 does not mean that 20 rolls will ensure one and only one 20, it means that with enough rolls about 1 in 20 of all the rolls will be 20s. This does not, of course, really say much about how chance works—beyond noting that chance seems to play out “properly” over large numbers.
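The law of large numbers can itself be illustrated with a simulation; this sketch (again assuming a fair die and a seeded random generator) tracks the observed fraction of 20s as the number of rolls grows:

```python
import random

def fraction_of_twenties(n, rng):
    """Fraction of n fair-d20 rolls that come up 20."""
    hits = sum(1 for _ in range(n) if rng.randint(1, 20) == 20)
    return hits / n

# The observed fraction wanders for small n but settles near the
# expected 0.05 as n grows.
rng = random.Random(42)
for n in (20, 2_000, 200_000):
    print(n, round(fraction_of_twenties(n, rng), 4))
```

Nothing forces any single batch of rolls to behave; it is only the aggregate that closes in on 1 in 20.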

One interesting way to look at this is to say that if there were an infinite number of d20 rolls, then 5% of the infinite number of rolls would be 20s. One might, of course, wonder what 5% of infinity would be—would it not be infinite as well? Since infinity is such a mess, a rather more manageable approach would be to use the largest finite number (which presumably has its own problems) and note that 5% of that number of d20 rolls would be 20s.

Another approach would be to say that the 1 in 20 chance means that if all 1 in 20 chance events were formed into sets of 20, sets could be made from all the events that would have one occurrence each of the 1 in 20 events. Using dice as the example, if all the d20 rolls in the universe were known and collected into sets of numbers, they could be divided up into sets of twenty with each number in each set. So, while my 20 rolls would not guarantee a 20, there would be one 20 out of every 20 rolls in the universe. There is still, of course, the question of how this would work. One possibility is that random events are not random and this ensures the proper distribution of events—in this case, dice rolls.

It could also be claimed that chance is a bare fact, that a perfect d20 rolled in perfect conditions would have a 1 in 20 chance of producing a specific number. On this view, the law of large numbers might fail—while unlikely, if chance were a real random thing, it would not be impossible for results to be radically different than predicted. That is, there could be an infinite number of rolls of a perfect d20 with no 20 being rolled. One could even imagine that since a 1 can be rolled on any roll, someone could roll an infinite number of consecutive 1s. Intuitively this seems impossible—it is natural to think that in an infinity every possibility must occur (and perhaps do so perfectly in accord with the probability). But, this would only be a necessity if chance worked a certain way, perhaps that for every 20 rolls in the universe there must be one of each result. Then again, infinity is a magical number, so perhaps this guarantee is part of the magic.
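The chance of a long streak with no 20 can be made precise: on a fair d20 the probability of missing 20 on every one of n rolls is (19/20)^n, which shrinks toward zero but never reaches it for any finite n. A small sketch:

```python
def prob_no_twenty(n, sides=20):
    """Probability that n rolls of a fair die all avoid the top face."""
    return ((sides - 1) / sides) ** n

for n in (20, 100, 1000):
    print(n, prob_no_twenty(n))
# Twenty rolls miss every 20 roughly a third of the time; by a thousand
# rolls the probability is vanishingly small, though never exactly zero.
```

This is why the bare-fact view of chance leaves the infinite case intuitively open: every finite streak without a 20 has a positive, if tiny, probability.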


Experience Machines

Experience Machines, edited by Mark Silcox (and including a chapter by me) is now available where fine books are sold, such as Amazon.

In his classic work Anarchy, State and Utopia, Robert Nozick asked his readers to imagine being permanently plugged into a ‘machine that would give you any experience you desired’. He speculated that, in spite of the many obvious attractions of such a prospect, most people would choose against passing the rest of their lives under the influence of this type of invention. Nozick thought (and many have since agreed) that this simple thought experiment had profound implications for how we think about ethics, political justice, and the significance of technology in our everyday lives.

Nozick’s argument was made in 1974, about a decade before the personal computer revolution in Europe and North America. Since then, opportunities for the citizens of industrialized societies to experience virtual worlds and simulated environments have multiplied to an extent that no philosopher could have predicted. The authors in this volume re-evaluate the merits of Nozick’s argument, and use it as a jumping-off point for the philosophical examination of subsequent developments in culture and technology, including a variety of experience-altering cybernetic technologies such as computer games, social media networks, HCI devices, and neuro-prostheses.


Right-to-Try

There has been a surge of support for right-to-try bills and many states have passed these into law. Congress, eager to do something politically easy and popular, has also jumped on this bandwagon.

Briefly put, the right-to-try laws give terminally ill patients the right to try experimental treatments that have completed Phase 1 testing but have yet to be approved by the FDA. Phase 1 testing involves assessing the immediate toxicity of the treatment. This does not include testing its efficacy or its longer-term safety. Crudely put, passing Phase 1 just means that the treatment does not immediately kill or significantly harm patients.

On the face of it, the right-to-try is something that no sensible person would oppose. After all, the gist of this right is that people who have “nothing to lose” are given the right to try treatments that might help them. The bills that propose to codify the right into law make use of the rhetorical narrative that the right-to-try laws would give desperate patients the freedom to seek medical treatment that might save them and this would be done by getting the FDA and the state out of their way. This is a powerful rhetorical narrative since it appeals to compassion, freedom and a dislike of the government. As such, it is not surprising that few people dare argue against such proposals. However, the matter does deserve proper critical consideration.

One interesting way to look at the matter is to consider an alternative reality in which the narrative of these laws was spun with a different rhetorical charge—negative rather than positive. Imagine, for a moment, if the rhetorical engines had cranked out a tale of how the bills would strip away the protection of the desperate and dying to allow predatory companies to use them as guinea pigs for their untested treatments. If that narrative had been sold, people would be howling against such proposals rather than lovingly embracing them. Rhetorical narratives, be they positive or negative, are logically inert. As such, they are irrelevant to the merits of the right-to-try proposals. How people feel about the proposals is likewise logically irrelevant. What is wanted is a cool examination of the matter.

On the positive side, the right-to-try does offer people the chance to try treatments that might help them. It is, obviously enough, hard to argue that people do not have a right to take such risks when they are terminally ill. That said, there are still some points that need to be addressed.

One important point is that there is already a well-established mechanism in place to allow patients access to experimental treatments. The FDA already has a system of expanded access that apparently approves the overwhelming majority of requests. Somewhat ironically, when people argue for the right-to-try by using examples of people successfully treated by experimental methods, they are showing that the existing system already allows people access to such treatments. This raises the question of why the laws are needed and what they change.

The main change in such laws tends to be to reduce the role of the FDA in the process. Without such laws, requests to use such experimental methods typically have to go through the FDA (which seems to approve most requests).  If the FDA was denying people treatment that might help them, then such laws would seem to be justified. However, the FDA does not seem to be the problem here—they generally do not roadblock the use of experimental methods for people who are terminally ill. This leads to the question of what factors are limiting patient access.

As would be expected, the main limiting factors are those that impact almost all treatment access: costs and availability. While the proposed bills grant the negative right to choose experimental methods, they do not grant the positive right to be provided with those methods. A negative right is a liberty—one is free to act upon it but is not provided with the means to do so. The means must be acquired by the person. A positive right is an entitlement—the person is free to act and is provided with the means of doing so. In general, the right-to-try proposals do little or nothing to ensure that such treatments are provided. For example, public money is not allocated to pay for such treatments. As such, the right-to-try is much like the right-to-healthcare for most people: you are free to get it provided you can get it yourself. Since the FDA generally does not roadblock access to experimental treatments, the bills and laws would seem to do little or nothing new to benefit patients. That said, the general idea of right-to-try seems reasonable—and is already practiced. While few are willing to bring them up in public discussions, there are some negative aspects to the right-to-try. I will turn to some of those now.

One obvious concern is that terminally ill patients do have something to lose. Experimental treatments could kill them significantly earlier than their terminal condition or they could cause suffering that makes their remaining time even worse. As such, it does make sense to have some limit on the freedom to try. After all, it is the job of the FDA and medical professionals to protect patients from such harms—even if the patients want to roll the dice.

This concern can be addressed by appealing to freedom of choice—provided that the patients are able to provide informed consent and have an honest assessment of the treatment. This does create something of a problem: since little is known about the treatment, the patient cannot be well informed about the risks and benefits. But, as I have argued in many other posts, I accept that people have a right to make such choices, even if these choices are self-damaging. I apply this principle consistently, so I accept that it grants the right-to-try, the right to same-sex marriage, the right to eat poorly, the right to use drugs, and so on.

The usual counters to such arguments from freedom involve arguments about how people must be protected from themselves, arguments that such freedoms are “just wrong” or arguments about how such freedoms harm others. The idea is that moral or practical considerations override the freedom of the individual. This is a reasonable counter and a strong case can be made against allowing people the right to engage in a freedom that could harm or kill them. However, my position on such freedoms requires me to accept that a person has the right-to-try, even if it is a bad idea. That said, others have an equally valid right to try to convince them otherwise and the FDA and medical professionals have an obligation to protect people, even from themselves.



What Can be Owned?

One rather interesting philosophical question is that of what can, and perhaps more importantly cannot, be owned. There is, as one might imagine, considerable dispute over this matter. One major historical example of such a dispute is the debate over whether people can be owned. A more recent example is the debate over the ownership of genes. While each specific dispute needs to be addressed on its own merits, it is certainly worth considering the broader question of what can and what cannot be property.

Addressing this matter begins with the foundation of ownership—that is, what justifies the claim that one owns something, whatever that something might be. This is, of course, the philosophical problem of property. Many are not even aware there is such a philosophical problem—they uncritically accept the current system, though they might have some complaints about its particulars. But, to simply assume that the existing system of property is correct (or incorrect) is to beg the question. As such, the problem of property needs to be addressed without simply assuming it has been solved.

One practical solution to the problem of property is to contend that property is a matter of convention. This can be formalized convention (such as laws) or informal convention (such as traditions) or a combination of both. One reasonable view is property legalism—that ownership is defined by the law. On this view, whatever the law defines as property is property. Another reasonable view is that of property relativism—that ownership is defined by the cultural practices (which can include the laws). Roughly put, whatever the culture accepts as property is property. These approaches, obviously enough, correspond to the moral theories of legalism (that the law determines morality) and ethical relativism (that culture determines morality).

The conventionalist approach to property does seem to have the virtue of being practical and of avoiding mucking about in philosophical disputes. If there is a dispute about what (or who) can be owned, the matter is settled by the courts, by force of arms or by force of persuasion. There is no question of what view is right—winning makes the view right. While this approach does have its appeal, it is not without its problems.

Trying to solve the problem of property with the conventionalist approach does lead to a dilemma: the conventions are either based on some foundation or they are not. If the conventions are not based on a foundation other than force (of arms or persuasion), then they would seem to be utterly arbitrary. In such a case, the only reasons to accept such conventions would be practical—to avoid trouble with armed people (typically the police) or to gain in some manner.

If the conventions have some foundation, then the problem is determining what it (or they) might be. One easy and obvious approach is to argue that people have a moral obligation to obey the law or follow cultural conventions. While this would provide a basis for a moral obligation to accept the property conventions of a society, these conventions would still be arbitrary. Roughly put, those under the conventions would have a reason to accept whatever conventions were accepted, but no reason to accept one specific convention over another. This is analogous to the ethics of divine command theory, the view that what God commands is good because He commands it and what He forbids is evil because He forbids it. As should be expected, the “convention command” view of property suffers from problems analogous to those suffered by divine command theory, such as the arbitrariness of the commands and the lack of justification beyond obedience to authority.

One classic moral solution to the problem of property is that offered by utilitarianism. On this view, the practice of property that creates more positive value than negative value for the morally relevant beings would be the morally correct practice. It does make property a contingent matter—as the balance of positive against negative shifted, radically different conceptions of property could thus be justified. So, for example, while a capitalistic conception of property might be justified at a certain place and time, that might shift in favor of state ownership of the means of production. As always, utilitarianism leaves the door open for intuitively horrifying practices that manage to fulfill that condition. However, this approach also has an intuitive appeal in that the view of property that creates the greatest good would be the morally correct view of property.

One very interesting attempt to solve the problem of property is offered by John Locke. He begins with the view that God created everyone and gave everyone the earth in common. While God does own us, He is cool about it and effectively lets each person own themselves. As such, I own myself and you own yourself. From this, as Locke sees it, it follows that each of us owns our labor.

For Locke, property is created by mixing one’s labor with the common goods of the earth. To illustrate, suppose we are washed up on an island owned by no one. If I collect wood and make a shelter, I have mixed my labor with the wood that can be used by any of us, thus making the shelter my own. If you make a shelter with your labor, it is thus yours. On Locke’s view, it would be theft for me to take your shelter and theft for you to take mine.

As would be imagined, the labor theory of ownership quickly runs into problems, such as working out a proper account of mixing of labor and what to do when people are born on a planet on which everything is already claimed and owned. However, the idea that the foundation of property is that each person owns themselves is an intriguing one and does have some interesting implications about what can (and cannot) be owned. One implication would seem to be that people are owners and cannot be owned. For Locke, this would be because each person is owned by themselves and ownership of other things is conferred by mixing one’s labor with what is common to all.

It could be contended that people create other people by their labor (literally in the case of the mother) and thus parents own their children. A counter to this is that although people do engage in sexual activity that results in the production of other people, this should not be considered labor in the sense required for ownership. After all, the parents just have sex and then the biological processes do all the work of constructing the new person. One might also play the metaphysical card and contend that what makes the person a person is not manufactured by the parents, but is something metaphysical like the soul or consciousness (for Locke, a person is their consciousness and the consciousness is within a soul).

Even if it is accepted that parents do not own their children, there is the obvious question about manufactured beings that are like people such as intelligent robots or biological constructs. These beings would be created by mixing labor with other property (or unowned materials) and thus would seem to be things that could be owned. Unless, of course, they are owners.

One approach is to consider them analogous to children—it is not how children are made that makes them unsuitable for ownership, it is what they are. On this view, people-like constructs would be owners rather than things to be owned. The intuitive counter is that people-like manufactured beings would be property like anything else that is manufactured. The challenge is, of course, to show that this would not entail that children are property—after all, considerable resources and work can be expended to create a child (such as IVF, surrogacy, and perhaps someday artificial wombs), yet intuitively they would not be property. This does point to a rather important question: is it what something is that makes it unsuitable to be owned or how it is created?


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Weight Loss, Philosophy & Science


When I was young and running 90-100 miles a week, I could eat all the things without gaining weight. Time is doubly cruel in that it slowed my metabolism and reduced my ability to endure high mileage. Inundated with the usual abundance of high calorie foods, I found I was building an unsightly pudge band around my middle. My first reaction was to try to get back to my old mileage, but I found that I now top out at 70 miles a week and anything more starts breaking me down. Since I could not exercise more, I was faced with the terrible option of eating less. Being something of an expert on critical thinking, I dismissed all the fad diets and turned to science to glean the best way to beat the bulge. Being a philosopher, I naturally misapplied the philosophy of science to this problem with some interesting results.

Before getting into the discussion, I am morally obligated to point out that I am not a medical professional. As such, what follows should be regarded with due criticism and you should consult a properly credentialed expert before embarking on changes to your exercise or nutrition practices. Or you might die. Probably not; but maybe.

As any philosopher will tell you, while the math used in science is deductive (the premises are supposed to guarantee the conclusion with certainty) scientific reasoning is inductive (the premises provide some degree of support for the conclusion that is less than complete). Because of this, science suffers from the problem of induction. In practical terms, this means that no matter how carefully the reasoning is conducted and no matter how good the evidence is, the conclusion drawn from the evidence can still be false. The basis for this problem is the fact that inductive reasoning involves a “leap” from the evidence/premises (what has been observed) to the conclusion (what has not been observed). Put bluntly, inductive reasoning can always lead to a false conclusion.

Scientists and philosophers have long endeavored to make science a deductive matter. For example, Descartes believed that he could find truths that he could know with certainty and then use valid deductive reasoning to generate a true conclusion with absolute certainty. Unfortunately, this science of certainty is the science of the future and always will be. So, we are stuck with induction.

The problem of induction obviously applies to the sciences that study nutrition, exercise and weight loss and, as such, the conclusions made in these sciences can always be wrong. This helps explain why the recommendations about these matters change relentlessly.

While there are philosophers of science who would disagree, science is mostly a matter of trying to figure things out by doing the best that can be done at the time. This is limited by the resources (such as technology) available at the time and by human epistemic capabilities. As such, whatever science is presenting at the moment is almost certainly at least partially wrong; but the errors tend to shrink over time (though they sometimes grow). This is true of all the sciences—consider, for example, how physics has changed since Thales began it. This also helps explain why recommendations about diet and exercise change constantly.

While science is sometimes presented as a field of pure reason outside of social influences, science is obviously a social activity conducted by humans. Because of this, science is influenced by the usual social factors and human flaws. For example, scientists need money to fund their research and can thus be vulnerable to corporations looking to “prove” various claims that are in their interest. As another example, scientific matters can become issues of political controversy, such as evolution and climate change. This politicization tends to derange science. As a final example, scientists can be motivated by pride and ambition to fudge or fake results. Because of these factors, the sciences dealing with nutrition and exercise are significantly corrupted and this makes it difficult to make a rational judgment about which claims are true. One excellent example is how the sugar industry paid scientists at Harvard to downplay the health risks presented by sugar and play up those presented by fat. Another illustration is the fact that the food pyramid endorsed by the US government has been shaped by the food industries rather than being based entirely on good science.

Given these problems it might be tempting to abandon mainstream science and go with whatever fad or food ideology one finds appealing. That would be a bad idea. While science suffers from these problems, mainstream science is vastly better than the nonscientific alternatives—they tend to have all of the problems of science without having its strengths. So, what should one do? The rational approach is to accept the majority opinion of the qualified and credible experts. One should also keep in mind the above problems and approach the science with due skepticism.

So, what are some of the things the best science of today says about weight loss? First, humans evolved as hunter-gatherers and getting enough calories was a challenge. As such, humans tend to be very good at storing energy in the form of fat, which is one reason the calorie-rich environment of modern society contributes to obesity. Crudely put, it is in our nature to overeat—because that once meant the difference between life and death.

Second, while exercise does burn calories, it burns far less than many imagine. For most people, the majority of calorie burning is a result of the body staying alive. As an example, I burn about 4,000 calories on my major workout days (estimated based on my Fitbit and activity calculations). But, about 2,500 of those calories are burned just staying alive. On those days I work out about four hours and I am fairly active the rest of the day. As such, while exercising more will help a person lose weight, the calorie impact of exercise is surprisingly low—unless you are willing to commit considerable time to exercise. That said, you should exercise—in addition to burning calories it has a wide range of health benefits.
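To put the arithmetic in one place, here is a quick sketch using the rough numbers above (the figures are my own Fitbit-based estimates, not measurements):

```python
# Rough split of daily calorie burn on a heavy workout day,
# using the estimated figures from the text above.
total_burn = 4000      # estimated total calories burned that day
resting_burn = 2500    # estimated calories burned just staying alive

exercise_burn = total_burn - resting_burn
exercise_share = exercise_burn / total_burn

print(f"Exercise accounts for {exercise_burn} of {total_burn} calories "
      f"({exercise_share:.0%}), despite roughly four hours of training.")
```

Even on a day built around exercise, well over half the burn comes from simply being alive, which is why eating less is so hard to avoid.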

Third, hunger is a function of the brain and the brain responds differently to different foods. Foods high in protein and fiber create a feeling of fullness that tends to turn off the hunger signal. Foods with a high glycemic index (like cake) tend to stimulate the brain to cause people to consume more calories. As such, manipulating your brain is an effective way to increase the chance of losing weight. Interestingly, as Aristotle argued, habituation to foods can train the brain to prefer foods that are healthier—that is, you can train yourself to prefer things like nuts, broccoli and oatmeal over cookies, cake, and soda. This takes time and effort, but can obviously be done.

Fourth, weight loss has diminishing returns: as one loses weight, one’s metabolism slows and less energy is needed. As such, losing weight makes it harder to lose weight, which is something to keep in mind. Naturally, all of these claims could be disproven in the next round of scientific investigation—but they seem quite reasonable now.
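The slowdown can be illustrated with the Mifflin-St Jeor equation, a widely used estimate of resting calorie burn. The runner’s stats below are made up purely for illustration:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age, male=True):
    """Estimate resting calorie burn (kcal/day) via Mifflin-St Jeor."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if male else base - 161

# Hypothetical runner: 180 cm tall, 50 years old, losing 10 kg.
before = bmr_mifflin_st_jeor(80, 180, 50)
after = bmr_mifflin_st_jeor(70, 180, 50)
print(f"Resting burn before: {before:.0f} kcal/day, "
      f"after: {after:.0f} kcal/day (a drop of {before - after:.0f})")
```

On this estimate, losing 10 kg costs the runner about 100 kcal/day of resting burn, so the same diet that produced the loss now produces less of one.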



Arguing for Fake News

In the current political climate, fake news is generally condemned. However, it was once employed as a weapon against the Nazis. While the effectiveness of the tactic can be debated, Sefton Delmer waged his own disinformation war with various radio shows such as Der Chef. Given the evil of the Nazis and the context of a war, it seems reasonable to regard this use of fake news as morally acceptable. This, of course, provides a launching point for arguing in favor of fake news.

By definition, fake news involves lying. As such, sorting out the ethics of fake news requires considering the ethics of lying. Sticking with the WWII theme, an obvious focus for a discussion of lying is the allies’ disinformation campaign that was aimed at deceiving the Germans about the landings in France. The allies were lying to the Germans, but this can easily be justified. One obvious approach is utilitarianism: whatever harm might arise from lying would be clearly offset by the benefits gained by these deceptions, in this case the saving of lives and the start of the liberation of Europe from the Nazis. Naturally, from the perspective of the Nazis, the utilitarian calculation would be rather different.

Another obvious approach is a conditional approach based on the ethics of war: if it is acceptable to kill people in war to achieve military goals, then the use of the lesser evil of deception to achieve military goals would surely be acceptable. There is a potential flaw in this reasoning in that some lesser evils would not be acceptable to inflict. To use a disturbing example, while raping a person is a lesser evil than killing them, the use of rape as a weapon of war certainly seems unacceptable. One possible reason for this is that killing is an inherent part of the nature of armed conflict while rape is not. Obviously enough, it could be argued that killing, even in war, is unacceptable, and a successful counter of this sort would defeat this justification for lying in war.

A third easy justification is based on the idea that doing bad things to bad people is justified because they are bad. That is, the evil of the Nazis justifies deceiving them because they have no moral right to expect to be told the truth. While appealing, this can be a bit problematic and the obvious counter is to argue that doing bad things to bad people is still bad. These three justifications can be deployed in defense of the current practice of fake news and it is to this that I now turn.

One interesting way to justify fake news of the sort used today is to argue that there is a state of war in politics and this justifies the use of the weapon of fake news. On this view, the fact that Alex Jones calls his show Infowars would be quite appropriate. There is also the well-established notion that the United States is engaged in a culture war. If these metaphors are taken literally, then the ethics of war could be used to justify the use of fake news in the same manner that it could be used to justify the deception of Der Chef. The challenge is to show that such a state of war exists and that it warrants the use of deception to achieve military ends. At this time, the war seems rather more metaphorical than literal and thus the war justification does not seem to hold.

Arguing in defense of fake news on utilitarian grounds simply involves making the case that the good done by fake news outweighs the harms. To illustrate, it could be argued that Hillary Clinton being elected president would have been so harmful that the use of fake news to prevent this was justified (although most fake news sources were in it for the money). The obvious problem with this justification is that if someone, such as Hillary, is that bad, then the use of the truth should suffice. This creates a bit of a paradox: if someone is so bad that deception would be justified to defeat them, then no deception should be needed.

This could be countered by arguing that the truth would not suffice. It could be claimed that people are not informed or intelligent enough to see the significance of the terrible truth and thus lies are needed. This would be somewhat like the idea of the noble lie—the people must be deceived for their own good. This is analogous to lying to children to get them to do the right thing because the truth is either beyond their understanding or would not motivate them to do the right thing. This counter does have considerable appeal and could certainly justify deceit to defeat the greater evil.

There is also the option of defending fake news by arguing that the target is bad and thus has no right to expect truth. To illustrate, one could argue that Hillary Clinton’s badness means that lying about her was okay—she is bad, so doing bad things to her is just fine. While this might have some appeal, there is the problem that even if the subject of the lies is bad, there is the matter of the badness of the people being lied to. If the justification is used that bad people can be treated badly, this would require that the people being lied to also be bad. If they are not bad, then this justification would not work.

Thus, there do seem to be reasonable arguments in favor of fake news—it is acceptable to lie when doing so would prevent a greater evil. In the ideal, speaking the truth should suffice. But, I am realistic enough to acknowledge that the truth does not always persuade.



Trump & Mercenaries: Arguments Against

While there are some appealing arguments in favor of the United States employing mercenaries, there are also arguments against this position. One obvious set of arguments is composed of those that focus on the practical problems of employing mercenaries. These problems include broad concerns about the competence of the mercenaries (such as worries about their combat effectiveness and discipline) as well as worries about the quality of their equipment. These concerns can, of course, be addressed on a case by case basis. Some mercenary operations are composed of well-trained, well-equipped ex-soldiers who are every bit as capable as professional soldiers serving their countries. If competent and properly equipped mercenaries are hired, there will obviously not be problems in these areas.

There are also obvious practical concerns about the loyalty and reliability of mercenaries—they are, after all, fighting for money rather than from duty or commitment to principles. This is not to disparage mercenaries. After all, working for money is what professionals do, whether they are mercenary soldiers, surgeons, electricians or professors. A surgeon who is motivated by money need not be less reliable than a colleague who is driven by a moral commitment to heal the sick and injured. Likewise, a soldier who fights for a paycheck need not be less dependable than a patriotic soldier.

That said, a person who is motivated primarily by money will act in accord with that value and this can make them considerably less loyal and reliable than someone motivated by higher principles. This is not to say that a mercenary cannot have higher principles, but a mercenary, by definition, sells their loyalty (such as it is) to the highest bidder. As such, this is a reasonable concern.

This concern can be addressed by paying mercenaries well enough to defend against bribery and by assigning tasks to mercenaries that require loyalty and reliability proportional to what the mercenaries can realistically offer. This, of course, can severely limit how mercenaries can be deployed and could make hiring them pointless—unless a nation has an abundance of money and a shortage of troops.

A concern that is both practical and moral is that mercenaries tend to operate outside of the usual chain of command of the military and are often exempt from many of the laws and rules that govern the operation of national forces. In many cases, mercenaries are intentionally granted special exemptions. An excellent illustration of how this can be disastrous is Blackwater, which was a major security contractor operating mercenary forces in Iraq.

In September of 2007 employees of Blackwater were involved in an incident resulting in 11 deaths. This was not the first such incident. Although many believe Blackwater acted incorrectly, the company was well protected against accountability because of the legal situation created by the United States. In 2004 the Coalition Provisional Authority administrator signed an order making all Americans in Iraq immune to Iraqi law. Security contractors enjoyed even greater protection. The Military Extraterritorial Jurisdiction Act of 2000, which allows charges to be brought in American courts for crimes committed in foreign countries, applies only to those contracting with the Department of Defense. Companies employed by the State Department, as was the case with Blackwater, are not covered by the law. Blackwater went even further and claimed exemption from all lawsuits and criminal prosecution. This defense was also used against a suit brought by families of four Blackwater employees killed in Iraq.

While there are advantages to granting mercenary forces exemptions from the law, Machiavelli warned against this because they might start “oppressing others quite contrary to your intentions.” His solution was to “keep him within the laws so that he does not overstep the mark.” This is excellent advice that should have been heeded. Instead, employing and placing such mercenaries beyond the law has led to serious problems.

The concern about mercenaries being exempt from the usual laws can be addressed simply enough: these exemptions can either be removed or not granted in the first place. While this will not guarantee good behavior, it can help encourage it.

The concern about mercenaries being outside the usual command structure can be harder to address. On the one hand, mercenary forces could simply be placed within the chain of command like any other unit. On the other hand, mercenary units are, by their very nature, outside of the usual command and organization structure and integrating them could prove problematic. Also, if the mercenaries are simply integrated as if they are normal units, then the obvious question arises as to why mercenaries would be needed in place of regular forces.

Yet another practical concern is that the employment of mercenaries can create public relations problems. While sending regular troops to foreign lands is always problematic, the use of mercenary forces can be more problematic. One reason is that the hiring of mercenaries is often looked down upon, in part because of the checkered history of mercenary forces. There is also the concern of how the local populations will perceive hired guns—especially given the above concerns about mercenaries operating outside of the boundaries that restrict regular forces. Finally, there is also the concern that the hiring of mercenaries can make the hiring country seem weak—the need to hire mercenaries would seem to suggest that the country has a shortage of competent regular forces.

A somewhat abstract argument against the United States employing mercenaries is based on the notion that nation states are supposed to be the sole operators of military forces. This, of course, assumes a specific view of the state and the moral right to operate military forces. If this conception of the state is correct, then hiring mercenaries would be to cede this responsibility (and right) to private companies, which would be unacceptable. The United States does allow private armies to exist within the country, if they have the proper connections to those in power. Blackwater, for example, was one such company. This seems to be problematic.

This concern can be countered with an alternative view of the state in which private armies are acceptable. In the case of private armies within a country, it could be argued that they are acceptable as long as they acknowledge the supremacy of the state. So, for example, an American mercenary company would be acceptable as long as it operated under conditions set by the United States government and served only in approved ways. To use an obvious analogy, there are “rent-a-cops” that operate somewhat like police. These are acceptable provided that they operate under the rules of the state and do not create a challenge to the police powers of the state.

While this counter is appealing, there do not seem to be any compelling reasons for the United States to cede its monopoly on military force and hire mercenaries. Other than to profit the executives and shareholders of these mercenary companies, of course.



Trump & Mercenaries: Arguments For

The Trump regime seems to be seriously considering outsourcing the war in Afghanistan to mercenaries.  The use of mercenaries, or contractors (as they might prefer to be called), is a time-honored practice. While the United States leads the world in military spending and has a fine military, it is no stranger to employing mercenaries. For example, the security contractor Blackwater became rather infamous for its actions in Iraq.

While many might regard the employment of mercenaries as repugnant, the proposal to outsource military operations to corporations should not be dismissed out of hand. Arguments for and against it should be given their due consideration. Mere prejudices against mercenaries should not be taken as arguments, nor should the worst deeds committed by some mercenaries be taken as damning them all.

As with almost every attempt at privatizing a state function, one of the stock arguments is based on the claim that privatization will save money. In some cases, this is an excellent argument. For example, it is cheaper for state employees to fly on commercial airlines than for a state to maintain a fleet of planes to send employees around on state business. In other cases, this argument falls apart. The stock problem is that a for-profit company must make a profit and this means it must have that profit margin over and above what it costs to provide the product or service. So, for a mercenary company to make money, it would need to pay all the costs that government forces would incur for the same operation and would need to charge extra to make a profit. As such, using mercenaries would not seem to be a money-saver.

It could be countered that mercenaries can have significantly lower operating costs than normal troops. There are various ways that costs could be cut relative to the costs of operating the government military forces: mercenaries could have cheaper or less equipment, they could be paid less, they could be provided less (or no) benefits, and mercenaries could engage in looting to offset their costs (and pass the savings on to their employer).

The cost cutting approach does raise some concerns about the ability of the mercenaries to conduct operations effectively: underpaid and underequipped troops would tend to do worse than better paid and better equipped troops. There are also obvious moral concerns about letting mercenaries loot.

However, there are savings that could prove quite significant: while the United States Department of Veterans Affairs has faced considerable criticism, veterans can get considerable benefits. For example, there is the GI Bill. Assuming mercenaries did not get such benefits, this would result in meaningful cost savings. In sum, if a mercenary company operated using common business practices of cost-cutting, then they could certainly run operations cheaper than the state. But, of course, if saving money is the prime concern, the state could engage in the same practices and save even more money by not providing a private contractor with the money needed to make a profit. Naturally, there might be good reasons why the state could not engage in these money-saving practices. In that case, the savings offered by mercenaries could justify their employment.

A second argument in favor of using mercenaries is based on the fact that those doing the killing and dying will not be government forces. While the death of a mercenary is as much the death of a person as the death of a government soldier, the mercenary’s death would tend to have far less impact on political opinion back home. The death of an American soldier in combat is meaningful to Americans in the way that the death of a mercenary would not.

While the state employing mercenaries is accountable for what they do, there is a distance between the misdeeds of mercenaries and the state that does not exist between the misdeeds of regular troops and the state. In practical terms, there is less accountability. It is, after all, much easier to disavow and throw mercenaries under the tank than it is to do the same with government troops.

This is not to say mercenaries provide a “get out of trouble” card to their employer—as the incidents in Iraq involving Blackwater showed, employers still get caught in the fallout from the actions of the mercenaries they hire. However, having such a force can be useful, especially when one wants to do things that would get regular troops into considerable trouble.

A final argument in favor of mercenaries is from the standpoint of the owners of mercenary companies. Most forms of privatization are a means of funneling public money into the pockets of executives and shareholders. Privatizing operations in Afghanistan could be incredibly profitable (or, rather, even more profitable) for contractors.

While receiving a tide of public money would be good for the companies, the profit argument runs directly up against the first argument for using mercenaries—that doing so would save money. This sort of “double vision” is common in privatization: those who want to make massive profits make the ironic argument that privatization is a good idea because it will save money.

