Tag Archives: Wikipedia

Gaming Newcomb’s Paradox II: Game Mechanics


(Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

As a philosopher, a game master (a person who runs a tabletop role-playing game) and an author of game adventures, I am rather fond of puzzles and paradoxes. As a philosopher, I can (like other philosophers) engage in the practice known as “just making stuff up.” As an adventure author, I can do the same—but I need to present the actual mechanics of each problem, puzzle and paradox. For example, a trap description has to specify exactly how the trap works, how it may be overcome and what happens if it is set off. I thought it would be interesting to look at Newcomb’s Paradox from a game master perspective and lay out the possible mechanics for it. But first, I will present the paradox and two stock attempts to solve it.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain-scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to laying out some possible mechanics in gamer terms.

One obvious advantage of crafting the mechanics for a game is that the author and the game master know exactly how the mechanic works. That is, she knows the truth of the matter. While the players in role-playing games know the basic rules, they often do not know the full mechanics of a specific challenge, trap or puzzle. Instead, they need to figure out how it works—which often involves falling into spiked pits or being ground up into wizard burger. Fortunately, Newcomb’s Paradox has very simple game mechanics, but many variants.

In game mechanics, the infallible Predictor is easy to model. The game master’s description would be as follows: “have the player character (PC) playing the Predictor’s game make her choice. The Predictor is infallible, so if the player takes box B, she gets the million. If the player takes both, she gets $1,000.” In this case, the right decision is to take B. After all, the Predictor is infallible. So, the solution is easy.

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000
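In game terms, the payout table is just a lookup from the Predictor’s call and the player’s pick to a dollar amount. A minimal sketch in Python (the names `PAYOUTS` and `payout` are mine, not from any game system):

```python
# Payouts from the table above, keyed by (predicted choice, actual choice).
PAYOUTS = {
    ("both", "both"): 1_000,          # Predictor right: box B is empty
    ("both", "b_only"): 0,            # Predictor wrong: player takes an empty B
    ("b_only", "both"): 1_001_000,    # Predictor wrong: B is full, player takes both
    ("b_only", "b_only"): 1_000_000,  # Predictor right: B is full
}

def payout(predicted: str, actual: str) -> int:
    """Dollar payout for a given prediction and a given player choice."""
    return PAYOUTS[(predicted, actual)]
```

With an infallible Predictor, the predicted choice always equals the actual choice, so the only reachable outcomes are $1,000 (both) and $1,000,000 (B only).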

A less-than-infallible Predictor is also easy to model with dice. The description of the Predictor simply specifies the accuracy of its predictions. So, for example: “The Predictor is correct 99% of the time. After the player character makes her choice, roll D100 (generating a number from 1-100). If you roll 100, the Predictor was wrong. If the PC picked just box B, it is empty and she gets nothing because the Predictor predicted she would take both. If she picked both, B is full and she gets $1,001,000 because the Predictor predicted she would just take one. If you roll 1-99, the Predictor was right. If the PC picked just box B, she gets $1,000,000. If she takes both, she gets $1,000 since box B is empty.” In this case, the decision is a gambling matter and the right choice can be calculated by considering the chance the Predictor is right and the relative payoffs. Assuming the Predictor is “almost always right” would make choosing only B the rational choice (unless the player absolutely and desperately needs only $1,000), since the player who picks just B will “almost always” get the $1,000,000 rather than nothing, while the player who picks both will “almost always” get just $1,000. But, if the Predictor is “almost always wrong” (or even just usually wrong), then taking both would be the better choice. And so on for all the fine nuances of probability. The solution is relatively easy—it just requires doing some math based on the chance the Predictor is correct in its predictions. As such, if the mechanism of the Predictor is specified, there is no paradox and no problem at all. But, of course, in a role-playing game puzzle, the players should not know the mechanism.
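The gambling version can be settled with a little arithmetic. A sketch of the expected-value calculation in Python (the function name and the break-even figure are my own; the dollar amounts and the 99% accuracy come from the dice example above):

```python
# Expected payoff for each choice, given the chance p that the
# Predictor's prediction is correct (the dice mechanic above uses p = 0.99).

def expected_payoff(p: float) -> dict:
    """Return expected dollar payouts for 'both' and 'b_only' at accuracy p."""
    # Taking both: if the Predictor was right (prob p), B is empty -> $1,000.
    # If it was wrong (prob 1 - p), B is full -> $1,001,000.
    both = p * 1_000 + (1 - p) * 1_001_000
    # Taking only B: if right, B is full -> $1,000,000; if wrong, empty -> $0.
    b_only = p * 1_000_000
    return {"both": both, "b_only": b_only}

print(expected_payoff(0.99))  # at 99% accuracy, taking only B dominates

# Break-even: b_only beats both when p * 1,000,000 > p * 1,000 + (1 - p) * 1,001,000,
# i.e. whenever p > 0.5005 -- so "usually right" already favors taking only B.
```

At 99% accuracy the expected values are $990,000 for taking only B against $11,000 for taking both, which is the math behind the “almost always right” case above.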

If the game master is doing her job, when the players are confronted by the Predictor, they will not know the Predictor’s predictive powers (and clever players will suspect some sort of trick or trap). The game master will say something like “after explaining the rules, the strange being says ‘my predictions are nearly always right/always right’ and sets two boxes down in front of you.” Really clever players will, of course, make use of spells, items, psionics or technology (depending on the game) to try to determine what is in the box and the capabilities of the Predictor. Most players will also consider just attacking the Predictor and seeing what sort of loot it has. So, for the game to be played in accord with the original version, the game master will need to provide plausible ways to counter all these efforts so that the players have no idea about the abilities of the Predictor or what is in box B. In some ways, this sort of choice would be similar to Pascal’s famous Wager: one knows that the Predictor will get it right or it won’t, but the player has no idea about the odds of the Predictor being right. From the perspective of the player, who is acting in ignorance, taking both boxes yields a 100% chance of getting $1,000 and somewhere between a 0% and 100% chance of getting the extra $1,000,000. Taking box B alone yields a 100% chance of not getting the $1,000 and some chance between 0% and 100% of getting $1,000,000. When acting in ignorance, the safe bet is to take both: the player walks away with at least $1,000. Taking just B is a gamble that might or might not pay off: the player might walk away with nothing or with $1,000,000.
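The “safe bet” reasoning above is the maximin rule: when the Predictor’s accuracy is unknown, compare the worst-case payoff of each option and take the option whose worst case is best. A sketch (the structure and names are mine):

```python
# Possible payoffs for each option when the Predictor's accuracy is unknown.
OUTCOMES = {
    "both": [1_000, 1_001_000],  # B empty (prediction right) or B full (wrong)
    "b_only": [0, 1_000_000],    # B empty (prediction wrong) or B full (right)
}

def maximin_choice(outcomes: dict) -> str:
    """Pick the option whose worst-case payoff is highest."""
    return max(outcomes, key=lambda option: min(outcomes[option]))

print(maximin_choice(OUTCOMES))  # prints "both": it guarantees at least $1,000
```

Maximin captures the player acting in ignorance; once the accuracy is known, the expected-value calculation takes over.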

But which choice is rational can depend on many possible factors. For example, if the players need $1,000 to buy a weapon they need to defeat the big boss monster in the dungeon, then picking the safe choice would be the smart choice: they can get the weapon for sure. If they need $1,001,000 to buy the weapon, then picking both would also be a smart choice, since that is the only way to get that sum in this game. If they need $1,000,000 to buy the weapon, then there is no rational way to pick between taking one box or both, since they have no idea which gives them the best chance of getting at least $1,000,000. Picking both will get them $1,000 but will only get them the $1,000,000 if the Predictor predicted wrongly, and they have no idea whether it will get it wrong. Picking just B only gets them $1,000,000 if the Predictor predicted correctly, and they have no idea whether it will get it right.

In the actual world, a person playing the game with the Predictor would be in the position of the players in the role-playing game: she does not know how likely it is that the Predictor will get it right. If she believes that the Predictor will probably get it wrong, then she would take both. If she thinks it will get it right, she would take just B. Since she cannot pick randomly (in Nozick’s scenario B is empty if the player decides by chance), that option is not available. As such, Newcomb’s Paradox is an epistemic problem: the player does not know the accuracy of the predictions but if she did, she would know how to pick. But, if it is known (or just assumed) the Predictor is infallible or almost always right, then taking B is the smart choice (in general, unless the person absolutely must have $1,000). To the degree that the Predictor can be wrong, taking both becomes the smarter choice (if the Predictor is always wrong, taking both is the best choice). So, there seems to be no paradox here. Unless I have it wrong, which I certainly do.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

kNOwMORE, Sexual Violence & Brands

Florida State University (Photo credit: Wikipedia)

Florida State University, which is across the tracks from my own Florida A&M University, has had some serious problems with sexual violence involving students. One response to this has been the creation of a student-driven campaign to address the problem with a brand and marketing:

 

Students developed the “kNOw More” brand to highlight the dual message of Florida State’s no tolerance stance on sexual violence and education efforts focused on prevention. Students also are leading marketing efforts for a campaign, “Ask. Respect. Don’t Expect,” aimed at raising awareness among their peers about obtaining clear consent for sexual activity and bystander intervention to prevent sexual assault or misconduct.

As an ethical person and a university professor, I certainly support efforts to reduce sexual violence on campuses (and anywhere). However, I found the use of the terms “brand” and “marketing efforts” somewhat disconcerting.

The main reason for this is that I associate the term “brand” with things like sodas, snack chips and smart phones rather than with efforts to combat sexual violence in the context of higher education. This sort of association creates, as I see it, some concerns.

The first is that the use of “brand” and “marketing efforts” in the context of sexual violence has the potential to trivialize the matter. Words, as the feminists rightly say, do matter. Speaking in terms of brands and marketing efforts makes it sound like Florida State sees the matter as on par with a new brand of FSU college products that will be promoted by marketing efforts. It would not seem too much to expect that the matter would be treated with more respect in terms of the language used.

The second concern ties back to a piece I wrote in 2011, “The University as a Business.” This essay was written in response to the reaction of Florida A&M University’s president to the tragic death of Florida A&M University student Robert Champion in a suspected hazing incident. The president, who has since resigned, wrote that “preserving the image and the FAMU brand is of paramount importance to me.” The general problem is that thinking of higher education in business terms is a damaging mistake that is harmful to the true mission of higher education, namely education. The specific problem is that addressing terrible things like killing and sexual violence in terms of brands and marketing is morally inappropriate. The brand and marketing view involves the idea that moral problems are to be addressed in the same manner that one would address a sales decline in chips, which suggests that the problems are mainly a matter of public relations: the creation of an appearance of action rather than effective action.

One obvious reply to my concerns is that terms such as “brand” and “marketing effort” are now the correct terms to use. That is, they are acceptable because of common use and I am thus reading too much into the matter.

On the one hand, that is a reasonable reply—I might be behind the times in terms of the terms. On the other hand, the casual acceptance of business terms in such a context would seem to support my view.

Another reply to my concerns is that the branding and marketing are aimed at addressing the problem of sexual violence and hence my criticism of the terminology is off the mark. This does have some appeal. After all, as people so often say, if the branding and marketing have some positive impact, then that would be good. However, this does not show that my concerns about the terminology and apparent underlying world-view are mistaken.

 


Obligations to Others: Hunger in America


(Photo credit: Wikipedia)

In my previous essay, I considered various stock arguments in favor of the claim that we have obligations to people we do not know. In this essay I will consider a rather concrete matter of obligation, namely that of hunger in the United States of America.

The United States is known as the wealthiest nation on the planet and also as a country that is facing an obesity epidemic. As such, it probably seems rather odd to claim that America faces a serious problem with hunger. Sadly, this is the case and the matter was featured in Tracie McMillan’s “The New Face of Hunger” in the August 2014 issue of National Geographic. Out of a total population of 313.9 million people, 48 million Americans are food insecure, which is a contemporary term for the hungry. In terms of demographics, over half of the food insecure are white and over half are people who live outside of the cities. 72% of recipients are children, senior citizens and the disabled. Two thirds of families on food stamps have at least one employed adult. The reason why these employed adults need assistance is declining wages: people can work multiple jobs and still not earn enough to buy adequate food. These facts run counter to the usual stereotypes often exploited by politicians.

The United States does have a program to address hunger—what was once called food stamps is now called SNAP (Supplemental Nutrition Assistance Program). While the program paid out $75 billion to about 48 million people in 2013, the average recipient received $133.07 a month (under $1.50 per meal). On average, SNAP recipients run out of money after three weeks and then turn to charity, such as food pantries and other assistance for the hungry. Of the 48 million recipients, 17.6 million lack the resources to provide for even their basic food needs.

The federal government also provides an indirect means of providing food—taxpayer money subsidizes the production of certain crops. Corn gets the lion’s share of subsidies and is distantly followed by wheat and soybeans. Rice, sorghum, peanuts, barley and sunflowers also receive some subsidies while the only subsidized fruit is the apple. Because of the subsidies, food products that include or involve corn, wheat or soybeans tend to be the cheapest. As such, it is not surprising that low-income people get most of their calories from such foods. Examples include sodas, energy drinks, sports drinks, chicken, grain-based desserts, tacos and pizza.  These foods tend to be high calorie and low nutrition foods.

Also impacting the diet of low income people is the existence of food deserts: areas that lack supermarkets but have fast food restaurants and small markets (like convenience stores). A surprising number of Americans live in these food deserts and do not own a car that would allow them to drive to buy healthier (and cheaper) food. For example, 43,000 people in Houston, Texas lack a car and live over a half mile from a grocery store. The food sold at these places tends to be more expensive than the food available at a grocery store and they tend to be high calorie, low-nutrient foods.

These two factors help explain the seeming paradox of an obesity epidemic among hungry people: people have easier access to high calorie foods that have low nutritional value. Hence, people tend to be overweight while also being malnourished. Now that the nature of the problem has been discussed, I now turn to the matter of obligations to others.

On the face of it, the main issue regarding obligations to the hungry would seem to focus on whether or not there is an obligation to provide people with food. This can be broken down into two sub-categories. First, whether or not there is a collective obligation to provide hungry citizens with food via the machinery of the state (in this case, SNAP). Second, whether or not there is an obligation on the part of better-off citizens to provide food to their hungry fellow citizens.

Arguing that the state has such an obligation is fairly straightforward. A basic obligation of the state is to provide for the good of the people and to protect them from harm. While the traditional focus is on the state providing military and police forces, this would certainly seem to extend to protecting citizens from starving.

A utilitarian argument can also be advanced in favor of this obligation: helping to feed millions of citizens creates more utility than disutility. Part of this is the obvious fact that people are happier when they have food to eat. Part of this is the less obvious fact that when people get hungry enough, open rebellion seems better than starving to death—so feeding the poor helps maintain social stability.

One stock objection against this view is to contend that providing such support creates a culture of dependence that encourages people to stay poor. The obvious counter to this is that, as noted above, those receiving the aid are mostly people who are seniors, disabled or children—people who should not be expected to labor to survive. Also, as noted above, two thirds of the families that received SNAP have at least one working adult. People are not on SNAP because they turn down opportunities—they are on SNAP because of the lack of opportunities.

The matter becomes rather more controversial when the issue switches to whether or not better off individuals are obligated to assist their fellow citizens. This, of course, means apart from paying taxes that help fund SNAP. Such assistance might involve donating money, time or food.

Intuitively, people tend to think that assisting others in this way is a nice thing to do and worthy of praise. However, people also tend to think that there is no obligation to do this and that someone who does not assist others in this way is not a bad person. This does have some appeal—after all, being bad is typically seen as an active thing rather than merely not doing good things.

Turning back to the general arguments for obligations to others, there are religious injunctions to feed the hungry (which explains why American churches are typically on the front line in the war against hunger), and it is easy to reverse the situation: if I were hungry, I would want my fellow citizens to help me. As such, I should help them when I am well off.

The utilitarian argument also applies here: a person who gives a little to help the hungry will incur a small cost (but might gain in happiness) but it will yield greater happiness on the part of the recipients who now have something to eat. As such, the utilitarian argument would seem to nicely ground this obligation. Of course, there is the stock objection about building dependence.

Rational self-interest would also seem to provide a reason to provide such aid—there are plenty of selfish reasons to do so, not the least of which is gaining a good reputation and helping to keep the hungry from revolting.

The debt argument might work here as well—if a person has benefited from the assistance of others, then she would be obligated to repay that debt. However, a person could contend that, as long as she has not received food from others when hungry, she owes nothing.

The argument from virtue obviously applies here: the virtue of generosity obligates a person to give to others in need. This, and the religious injunction, would seem to be the truest forms of actual obligation—as opposed to merely doing it from self-interest or for utility.

Digging deeper, there is also another issue. As noted above, people are hungry primarily because they are not earning enough to purchase adequate food. One reason for this is that wages have consistently declined for most Americans, although the profits of businesses have steadily increased. As such, the United States is the wealthiest country in the world, yet has many very poor people. This raises the moral issue of whether or not employers are obligated to pay a living wage—a wage that would enable a person to purchase food on that salary without requiring the assistance of the state or others.

Businesses obviously have a strong self-interest in not doing so—lower wages mean greater profits and shifting the cost to other people (taxpayers and those who contribute to food pantries) means that their workers survive despite the lack of a living wage. However, there is still the moral question of whether or not they have an obligation to provide such a living wage.

The religious injunctions would seem to apply to employers that accept these specific faiths—and companies that wish to claim they are religious should be obligated to act the part. However, secular companies can easily claim exemption.

Reversing the situation would also apply: presumably those running businesses would not want to be so poorly paid. Of course, they would probably claim that as job creators there is a relevant difference.

The utilitarian argument does involve some complexities. After all, there can be very good utilitarian arguments for allowing some people to suffer so as to produce greater utility for others—so a case could be made that the utility generated outweighs the disutility of the low pay. However, the opposite sort of argument can also be made.

The debt argument would also apply. If corporations are people or at least are fictions that are run by people, then they would have a debt to the others that make civilization possible. As such, they should pay back this debt, perhaps in the form of decent wages.

The virtues of fairness and generosity would seem to obligate employers to pay employees fairly and this should be a living wage, at least in many cases. If corporations are people, then they should surely be held to the same obligations as actual people.

Thus, it would seem that there are good reasons to accept that we are obligated to help others.

 


Acquired Savantism & Innate Ideas

Portrait of Socrates (Photo credit: Wikipedia)

One classic philosophical dispute is the battle over innate ideas. An innate idea, as the name suggests, is an idea that is not acquired by experience but is “built into” the mind. As might be imagined, the specific nature and content of such ideas vary considerably among the philosophers who accept them. Leibniz, for example, takes God to be the author of the innate ideas that exist within the monads. Other thinkers accept that humans have an innate concept of beauty that is the product of evolution.

Over the centuries, philosophers have advanced various arguments for (and against) innate ideas. For example, some take Plato’s Meno as a rather early argument for innate ideas. In the Meno, Socrates claims to show that Meno’s servant knows geometry, despite the (alleged) fact that the servant never learned geometry. Other philosophers have argued that there must be innate ideas in order for the mind to “process” information coming in from the senses. To use a modern analogy, just as a smart phone needs software to make the camera function, the brain would need to have built in ideas in order to process the sensory data coming in via the optic nerve.

Other philosophers, such as John Locke, have been rather critical of the idea of innate ideas in general. Others have been critical of specific forms of innate ideas—the idea that God is the cause of innate ideas is, as might be suspected, not very popular among philosophers today.

Interestingly enough, there is some contemporary evidence for innate ideas. In his August 2014 Scientific American article “Accidental Genius”, Darold A. Treffert advances what can be seen as a 21st century version of the Meno. Investigating the matter of “accidental geniuses” (people who become savants as the result of an accident, such as a brain injury), researchers found that they could create “instant savants” by using brain stimulation. These instant savants were able to solve a mathematical puzzle that they could not solve without the stimulation. Treffert asserts that this ability to solve the puzzle was due to the fact that they “’know things’ innately they were never taught.” To provide additional support for his claim, Treffert gave the example of a savant sculptor, Clemons, who “had no formal training in art but knew instinctively how to produce an armature, the frame for the sculpture, to enable his pieces to show horses in motion.” Treffert goes on to explicitly reject the “blank slate” notion (which was made famous by John Locke) in favor of the notion that the “brain might come loaded with a set of innate predispositions for processing what it sees or for understanding the ‘rules’ of music, art or mathematics.” While this explanation is certainly appealing, it is well worth considering alternative explanations.

One stock objection to this sort of argument is the same sort of argument used against claims about past life experiences. When it is claimed that a person had a past life on the basis that the person knows about things she would not normally know, the easy and obvious reply is that the person learned about these things through perfectly mundane means. In the case of alleged innate ideas, the easy and obvious reply is that the person gained the knowledge through experience. This is not to claim that the person in question is engaged in deception—she might not recall the experience that provided the knowledge. For example, the instant savants who solved the puzzle probably had previous puzzle experience and the sculptor might have seen armatures in the past.

Another objection is that an idea might appear to be innate but might actually be a new idea that did not originate directly from a specific experience. To use a concrete example, consider a person who developed a genius for sculpture after a head injury. The person might have an innate idea that allowed him to produce the armature. An alternative explanation is that the person faced the problem regarding his sculpture and developed a solution. The solution turned out to be an armature, because that is what would solve the problem. To use an analogy, someone faced with the problem of driving in a nail might make a hammer, but this does not entail that the idea of a hammer is innate. Rather, a hammer-like device is what would work in that situation and hence it is what a person would tend to make.

As has always been the case in the debate over innate ideas, the key question is whether the phenomena in question can be explained best by innate ideas or without them.

 


The Worst Thing

Anselm of Canterbury (Photo credit: Wikipedia)

It waits somewhere in the dark infinity of time. Perhaps the past. Perhaps the future. Perhaps now. The worst thing.

Whenever something bad happens to me, such as a full quadriceps tendon tear, people always helpfully remark that “it could have been worse.” Some years ago, after that tendon tear, I wrote an essay about this matter which focused on possibility and necessity. That is, whether it could be worse or not. While the tendon tear was perhaps the worst thing to happen to me (as of this writing), I did have some bad things happen this summer and got to hear how things could have been worse. Since it seemed like a fun game, I decided to play along: when lightning took out the pine tree in front of my house I said “why, it could have been worse” and then was hit with inspiration: what would be the worst thing? The thing than which nothing worse can be conceived.

I can say with complete confidence that there must be such a thing. After all, just as there must be a tallest building, there must be the worst thing. But, of course, this would not be much of an essay if I failed to argue for this claim.

Interestingly enough, arguing for the worst thing is rather similar to arguing for the existence of a perfect thing (that is, God). Thomas Aquinas famously made use of his Five Ways to argue for the existence of God and most of these arguments relied on a combination of an infinite regress and a reduction to absurdity. For example, Aquinas argued from the fact that things move to the need for an unmoved mover on the grounds that an infinite regress would arise if everything had to be moved by something else. A regress argument with a reduction to absurdity will serve quite nicely in arguing for the worst thing.

Take any thing. To avoid the usual boring philosophical approach of calling this thing X, I’ll call this thing Troy. If Troy is the worst thing, then the worst thing exists. If Troy is not the worst thing, then there must be another thing that is worse than Troy. That thing, which I will call Sally, is either the worst thing or not. If Sally is the worst thing, then the worst thing exists and is Sally. If it is not Sally, there must be something worse than Sally. This cannot go on to infinity so there must be a thing that is worse than all other things—the worst thing. I’ll call it Dave.

The obvious counter is to throw down the infinity gauntlet: if there is an infinite number of things, there will not be a worst thing. After all, for any thing, there will be an infinite number of other things. As Leibniz claimed, the infinite number cannot be said to be even or odd; therefore, in an infinite universe a thing could not be said to be worst.

One might be inclined to reject the infinity gauntlet—after all, even if there is an infinite number of things, each thing would stand in a relation to all other things and there would thus still be a worst thing.

Another obvious counter is to assert that there could be two or more things that are equally bad—that is, identical in their badness. As such, there would not be a worst thing.  A counter to this is to follow Leibniz once again and argue that there could not be two identical things—they would need to differ in some way that would make one worse than the other. This could be countered by asserting that the two might be different, yet equally bad. In this case, the response would be to follow the model used in arguing for the best thing (God) and assert that the worst thing would be worst in every possible respect and hence anything equally as bad would be identical and thus there would be one worst thing, not two. I suppose that this would have some consolation value—it would certainly be a scary universe that had multiple worst things.

Of course, this just shows that there is something that is worse than all other things that happen to be—which leaves open the possibility that it is not the worst thing in another sense of the term. So now I will turn to arguing for the truly worst thing.

Another way to argue for the worst thing is to use the model of St. Anselm’s ontological argument. Very crudely put, the ontological argument works like this: God is that than which nothing greater can be conceived. If God existed only as an idea in the mind, a greater being could be conceived, namely God existing in reality. Thus, God must exist.

In the case of the worst thing, it would be that than which nothing worse can be conceived. If it existed only as an idea in the mind, a worse thing could be conceived, namely the worst thing existing in reality. Thus, the worst thing must exist.

Another variant on the ontological argument can also be used here. A stock variant is that since God is perfect, He must exist. This is because if He did not exist, He would not be perfect. But He is, so He must. In the case of the worst thing, the worst thing must exist because it is the worst. This is because if it did not exist, it would not be the worst. But it is, so it does. This worst thing would be the truly worst thing (just as God is supposed to be the best thing).

This approach does, of course, inherit the usual difficulties of an ontological argument as pointed out by Gaunilo and Kant (whose objection is that existence is not a predicate). It would certainly be better for the universe and the folks in it for the critics to be right so that there is no worst thing.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Terraforming Ethics

J’atorg struggled along on his motile pods, wheezing badly as his air sacs fought with the new air. He cursed the humans, invoking the gods of his people. Reflecting, he cursed the humans by invoking their gods. The gods of his people had proven weak: the bipeds had come and were transforming his world into an environment more suitable for themselves, showing their gods were stronger. The humans said it would take a long time for the world to fully change, but J’atorg could already see, taste and smell the differences. He did not know who he hated more: the hard-eyed humans who were destroying his world or the soft-eyed humans who poured forth words about “rights”, “morality” and “lawsuits” while urging patience. He knew that his people would die, aside from those the humans kept as curiosities or preserved to assuage their conscience with cruel pity.

English: Terraforming (Photo credit: Wikipedia)

Terraforming has long been a staple in science fiction, though there has been some practical research in more recent years.  In general terms, terraforming is transforming a planet to make it more earthlike. Typically, the main goal of terraforming is to make an alien world suitable for human habitation by altering its ecosystem. Since this process would tend to radically change a world, terraforming does raise ethical concerns.

The morally easiest scenario is one in which a lifeless, uninhabited (even by non-living entities) planet (or moon) is to be terraformed. If Mars is lifeless and uninhabited, it would fall into this category. This sort of scenario is the morally easiest because there would be no beings on the world to be impacted by the terraforming: no rights violated, no harms inflicted, and so on. As such, terraforming such a planet would seem to be morally acceptable.

One obvious counter is to argue that a planet has moral status of its own, distinct from that of the sort of beings that might inhabit a world. Intuitively, the burden of proof for this status would rest on those who make this claim since inanimate objects do not seem to be the sort of entities that can be wronged.

A second obvious counter is to argue that an uninhabited world might someday produce inhabitants. After all, the scientific account of life on earth involves life arising from non-life by natural processes. If an uninhabited world is terraformed, the possible inhabitants that might have arisen from the world would never be.

While arguments from potentiality tend to be weak, they are not without their appeal. Naturally, the concern for the world in question would be proportional to how likely it is that it would someday produce inhabitants of its own. If this is unlikely, then the terraforming would be of less moral concern. However, if the world has considerable potential, then the matter is clearly more serious. To reverse the situation, we certainly would not have wanted earth to be transformed by aliens to fit themselves if doing so would have prevented our eventual evolution. As such, to act morally, we would need to treat other worlds as we would have wanted our world to be treated.

The stock counter to such potentiality arguments is that the merely potential does not morally outweigh the actual. This is the sort of view that is used to justify the use of resources now even when doing so will make them unavailable to future generations. This view does, of course, have its own problems and there can be rather serious arguments regarding the status of the potential versus that of the actual.

If a world has life or is otherwise inhabited (I do not want to assume that all inhabitants must be life in our sense of the term), then the morality of terraforming becomes more complicated. After all, the inhabitants of a world would seem likely to have some moral status. Not surprisingly, the ethics of terraforming an inhabited world are very similar to those of altering an environment on earth through development or some other means. Naturally enough, the stock arguments about making species extinct would come into play here as well. As on earth, the more complex the inhabitants, the greater the moral concern—assuming that moral status is linked to complexity. After all, we do not balk at eliminating viruses or bacteria, but are sometimes concerned when higher forms of life are at stake.

If the inhabitants are people (albeit non-human), then the matter is even more complicated and would bring into play the stock arguments about how people should be treated. Despite the ethical similarities, there are some important differences when it comes to terraforming ethics.

One main difference is one of scale: bulldozing a forest to build condos versus changing an entire planet for colonizing. The fact that the entire world is involved would seem to be morally significant—assuming that size matters.

There is also another important difference, namely the fact that the world is a different world. On earth, we can at least present some plausible ownership claim. Asserting ownership over an alien world is rather more problematic, especially if it is already inhabited.

Of course, it can be countered that we are inhabitants of this universe and hence have as good a claim to alien worlds as our own—after all, it is our universe. Also, there are all sorts of clever moral justifications for ownership that people have developed over the centuries and these can be applied to ownership of alien worlds. After all, the moral justifications for taking land from other humans can surely be made to apply to aliens. To be consistent we would have to accept that the same arguments would morally justify aliens doing the same to us, which we might not want to do. Or we could simply go with a galactic state of nature where profit is the measure of right and matters are decided by the space sword. In that case, we must hope that we have the biggest sword or that the aliens have better ethics than we do.

 


Buffer Zones & Consistency

English: United States Supreme Court building ...

(Photo credit: Wikipedia)

In the summer of 2014, the United States Supreme Court struck down the Massachusetts law that forbade protesters from approaching within 35 feet of abortion clinics. The buffer zone law was established in response to episodes of violence. Not surprisingly, the court based its ruling on the First Amendment—such a buffer zone violates the right of free expression of those wishing to protest against abortion or who desire to provide unsought counseling to those seeking abortions.

Though I am a staunch supporter of the freedom of expression, I do recognize that there can be legitimate limits on this freedom—especially when such limits provide protection to the life, liberty and property of others. To use the stock examples, freedom of expression does not permit people to engage in death threats, slander, or panicking people by screaming “fire” in a crowded, non-burning theater.

While I recognize that the buffer zone serves a legitimate purpose in enhancing safety, I agree with the court. The ground for this agreement is that the harm done to freedom of expression by banning protest in public spaces exceeds the risk of harm caused by allowing such protests. Naturally enough, I agree that people who engage in threatening behavior can be justly removed—but this is handled by existing laws. That said, I regard the arguments in favor of the buffer zone as having merit—weighing the freedom of expression against safety concerns is challenging and people of good conscience can disagree in this matter.

One rather interesting fact is that the Supreme Court has its own buffer zone—there is a federal law that bans protesters from the plaza of the court.  Since the plaza is a public space, it would seem analogous to the public space of the sidewalks covered by the Massachusetts law. Given the Supreme Court’s ruling, the principle seems to be that the First Amendment ensures a right to protest in public spaces—even when there is a history of violence and legitimate safety concerns exist. While the law is whatever those with the biggest guns say it is, there is still the ethics of the matter, and this is governed by the consistent application of principles.

A principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that people are initially morally equal and hence must be treated as such. This requires that moral principles be applied consistently.  Naturally, a person’s actions can affect the initial equality. For example, a person who commits horrible evil deeds would not be morally equal to someone who does predominantly good deeds.

Impartiality is the assumption that moral principles must not be applied with partiality. Inconsistent application would involve non-impartial application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. What counts as a relevant difference in particular cases can be a matter of great controversy. For example, while many people do not think that gender is a relevant difference in terms of how people should be treated other people think it is very important. This assumption requires that principles be applied consistently.

Given that the plaza of the court is a public space analogous to a sidewalk, then if the First Amendment guarantees the right to protest in public spaces of this sort, then the law forbidding protests in the plaza is unconstitutional and must be struck down. To grant protesters access to the sidewalks outside clinics while forbidding them from the public plaza of the court would be an inconsistent application of the principle. But, of course, there is always a way to counter this.

One way to counter this in a principled way is to show that an alleged inconsistency is merely apparent.  One way to do this is by showing that there is a relevant difference in the situation. If the Supreme Court wishes to morally justify their buffer while denying others their buffers, they would need to show a relevant difference that warrants the difference in application. They could, for example, contend that a plaza is relevantly different from a sidewalk. One might point to a size difference and how this impacts protesting. They could also contend that government property is exempt from the law (much like certain state legislatures ban the public from bringing guns into the legislature building even while passing laws allowing people to bring guns into places where other people work)—but they would need to ground the exemption.

My own view, obviously enough, is that there is no relevant difference between the scenarios: if the First Amendment applies to the public spaces around private property, it also applies to the public spaces around state property (which is the most public of public property).


The Sharing Economy I: Regulation

Airbnb logo (Photo credit: Wikipedia)

The rising success of companies such as Airbnb and Uber have created considerable interest in what has been called the sharing economy. The core idea behind the sharing economy is an old one: people provide goods and services as individuals rather than acting as employees or businesses. One classic example of this is paying a neighborhood kid who mows lawns or babysits. Another classic example is paying a friend gas money for a ride to the airport. The new version of the sharing economy does make some changes to the traditional model. The fundamental difference is that the old sharing economy was typically an informal word-of-mouth system while the new sharing economy is organized by companies. As an example of the old sharing economy, your neighbor might have told you about the teenager she hired to babysit her kids or to mow her lawn (back in the day when this was an accepted practice). As an example of the new sharing economy, you might use the Uber app to get a chipper soccer mom to give you a ride to the airport in her mini-van. Unlike the old sharing economy in which your neighbor (probably) did not take a cut for connecting you to a sitter or mower, the companies that connect people get a cut of the proceeds—which can be justified by the services they provide.

The new sharing economy has received considerable praise, mainly due to the fact that it makes it easier for people to make money in what are still challenging economic times. For example, a person who would be hard pressed to get a job as a professional cabbie can easily drive for Uber. However, it has also drawn considerable criticism.

As might be suspected, some of the most vocal critics of the sharing economy are the people whose livelihoods and profits are threatened by this economy. For example, Uber’s conflicts with taxi services routinely make the news. Some people dismiss these criticisms as the usual lamentations of obsolete industries while others regard the criticisms as having legitimacy. In any case, there is certainly considerable controversy regarding this new sharing economy.

One point of concern is regulation. As it now stands, the sharing economy is exploiting the loopholes that exist in the informal economy (which is regulated far less than the formal economy). For example, professional cab drivers are subject to a fairly extensive set of regulations (and expenses, such as insurance costs) while an Uber driver is not. As another example, the hotel industry is regulated while services like Airbnb currently lack such regulations regarding things such as safety and handicap access.

Some proponents of the free market might praise the limited (or nonexistent) regulation and this praise might have some merit—after all, it has long been contended that regulation impedes profits. However, there are at least two legitimate concerns here.

One is, obviously enough, the matter of fairness. If taxi drivers and hotels are subject to strict regulations that also involve additional costs, then it hardly seems fair that companies like Uber and Airbnb can offer the same services while evading these regulations. One obvious option is to impose comparable regulations on the sharing economy. Another obvious option is to reduce regulations on the traditional economy. In any case, fairness would seem to require comparable regulation.

The second is the matter of safety and other concerns of the public good. While some regulations might be seen as burdensome, others clearly exist to protect the public from legitimate harms. For example, hotels are held to certain standards of cleanliness and safety. As another example, taxi companies are subject to regulations aimed at protecting the public. If the new sharing economy puts people at risk in similar ways, then it seems reasonable to impose comparable regulations on the sharing economy. After all, whether you are getting a hotel room or going through Airbnb, you should have a reasonable expectation that you will not perish in a fire due to safety issues.

It might be countered that the new sharing economy should still fall under the standards of the old sharing economy. For example, if I ask a friend to take me to the airport and she has an awful car and is a terrible driver, it is hardly the business of the state to regulate my choice (although the state would have the right to address any actual traffic violations). As another example, if I crash on someone’s couch for the night, it is hardly the business of the state to make sure that the couch is properly cleaned and that the house is suitable (although it would need to be up to code).

While this does have some appeal, there are two main arguments against this approach. The first is that the informal economy is largely unregulated because it is just that—informal. Once a company like Uber or Airbnb gets into the picture, the economy has become formalized—there is now a central company that is organizing things. This allows a practical means of regulating what is now commercial activity. The second is the matter of scale. When the informal economy is relatively small, the cost and difficulty of regulating for the public good can be prohibitive. For example, policing neighborhood babysitters or people who give the occasional ride to friends and get gas money for doing so would impose a high cost for a little return in public good. However, when an aspect of the informal economy gets organized by a company and greatly expands in size, then there is more at stake and hence paying the cost of regulating for the public good becomes viable. For example, regulating people occasionally giving friends or associates rides is one thing (a silly thing), but regulating large numbers of people driving vehicles for Uber is quite another matter.

One area that is going to be a matter of considerable controversy is that of discrimination. If Bob does not want to share a ride with a white colleague or give a handicapped associate a lift, then that is Bob’s right.  After all, a citizen has every right to be biased. But, it gets rather more complicated if Bob is driving for Uber—after all, discrimination does harm to the public and the public might have a stake in preventing Uber Bob from discriminating. Similarly, if Bob does not want his Latino friend crashing on his couch because he thinks Latinos are thieves, that is Bob’s right (the right of being a jerk to one’s friends). But, if Bob is renting out a room through Airbnb, then this could be a matter of legitimate public concern.

It might be countered that people “freedriving” or “freerenting” for the sharing companies still retain the right to discriminate since they are acting as individuals, albeit under the auspices of a company. That does have considerable appeal, especially since the people driving or renting are not actually employees of these companies. The company is just assisting people to exchange services and, it could be claimed, is no more accountable than a newspaper that has a “for sale” or “help wanted” section. Obviously enough, companies are generally going to want to avoid being associated with discrimination and hence they will probably engage in some degree of self-policing to avoid PR nightmares (or will do so if they are sensible or ethical). However, there is clearly an important issue here regarding whether or not laws against discrimination should be applicable to individuals who are involved with the sharing economy companies. The somewhat fuzzy status of those providing services does create a legitimate problem. As noted above, on one hand they are still just individuals using the service to connect to others. On the other hand, this service does seem to bring them into more of a formal business situation which is subject to such laws.

 


Ethics & Free Will

Conscience and law (Photo credit: Wikipedia)

Azim Shariff and Kathleen Vohs recently had their article, “What Happens to a Society That Does Not Believe in Free Will”, published in Scientific American. This article considers the causal impact of a disbelief in free will with a specific focus on law and ethics.

Philosophers have long addressed the general problem of free will as well as the specific connection between free will and ethics. Not surprisingly, studies conducted to determine the impact of disbelief in free will have the results that philosophers have long predicted.

One impact is that when people have doubts about free will they tend to have less support for retributive punishment. Retributive punishment, as the name indicates, is punishment aimed at making a person suffer for her misdeeds. Doubt in free will did not negatively impact a person’s support for punishment aimed at deterrence or rehabilitation.

While the authors do consider one reason for this, namely that those who doubt free will would regard wrongdoers as analogous to harmful natural phenomena that need to be dealt with rather than subjected to vengeance, this view also matches a common view about moral accountability. To be specific, moral (and legal) accountability is generally proportional to the control a person has over events. To use a concrete example, consider the difference between these two cases. In the first case, Sally is driving well above the speed limit and is busy texting and sipping her latte. She doesn’t see the crossing guard frantically waving his sign and runs over the children in the cross walk. In case two, Jane is driving the speed limit and children suddenly run directly in front of her car. She brakes and swerves immediately, but she hits the children. Intuitively, Sally has acted in a way that was morally wrong—she should have been going the speed limit and she should have been paying attention. Jane, though she hit the children, did not act wrongly—she could not have avoided the children and hence is not morally responsible.

For those who doubt free will, every case is like Jane’s case: for the determinist, every action is determined and a person could not have chosen to do other than she did. On this view, while Jane’s accident seems unavoidable, so was Sally’s accident: Sally could not have done other than she did. As such, Sally is no more morally accountable than Jane. For someone who believes this, inflicting retributive punishment on Sally would be no more reasonable than seeking vengeance against Jane.

However, it would seem to make sense to punish Sally to deter others and to rehabilitate Sally so she will drive the speed limit and pay attention in the future. Of course, if there is no free will, then we would not choose to punish Sally, she would not choose to behave better and people would not decide to learn from her lesson. Events would happen as determined—she would be punished or not. She would do it again or not. Other people would do the same thing or not. Naturally enough, to speak of what we should decide to do in regards to punishments would seem to assume that we can choose—that is, that we have some degree of free will.

A second impact that Shariff and Vohs noted was that a person who doubts free will tends to behave worse than a person who does not have such a skeptical view. One specific area in which behavior worsens is that such skepticism seems to incline people to be more willing to harm others. Another specific area is that such skepticism also inclines others to lie or cheat. In general, the impact seems to be that the skepticism reduces a person’s willingness (or capacity) to resist impulsive reactions in favor of greater restraint and better behavior.

Once again, this certainly makes sense. Going back to the examples of Sally and Jane, Sally (unless she is a moral monster) would most likely feel remorse and guilt for hurting the children. Jane, though she would surely feel badly, would not feel moral guilt. This would certainly be reasonable: a person who hurts others should feel guilt if she could have done otherwise but should not feel moral guilt if she could not have done otherwise (although she certainly should feel sympathy). If someone doubts free will, then she will regard her own actions as being out of her control: she is not choosing to lie, or cheat or hurt others—these events are just happening. People might be hurt, but this is like a tree falling on them—it just happens. Interestingly, these studies show that people are consistent in applying the implications of their skepticism in regards to moral (and legal) accountability.

One rather important point is to consider what view we should have regarding free will. I take a practical view of this matter and believe in free will. As I see it, if I am right, then I am…right. If I am wrong, then I could not believe otherwise. So, choosing to believe I can choose is the rational choice: I am right or I am not at fault for being wrong.

I do agree with Kant that we cannot prove that we have free will. He believed that the best science of his day was deterministic and that the matter of free will was beyond our epistemic abilities. While science has marched on since Kant, free will is still unprovable. After all, deterministic, random and free-will universes would all seem the same to the people in them. Crudely put, there are no observations that would establish or disprove metaphysical free will. There are, of course, observations that can indicate that we are not free in certain respects—but completely disproving (or proving) free will would seem to beyond our abilities—as Kant contended.

Kant had a fairly practical solution: he argued that although free will cannot be proven, it is necessary for ethics. So, crudely put, if we want to have ethics (which we do), then we need to accept the existence of free will on moral grounds. The experiments described by Shariff and Vohs seem to support Kant: when people doubt free will, this has an impact on their ethics.

One aspect of this can be seen as positive—determining the extent to which people are in control of their actions is an important part of determining what is and is not a just punishment. After all, we do not want to inflict retribution on people who could not have done otherwise or, at the very least, we would want relevant circumstances to temper retribution with proper justice.  It also makes more sense to focus on deterrence and rehabilitation more than retribution. However just, retribution merely adds more suffering to the world while deterrence and rehabilitation reduce it.

The second aspect of this is negative—skepticism about free will seems to cause people to think that they have a license to do ill, thus leading to worse behavior. That is clearly undesirable. This then, provides an interesting and important challenge: balancing our view of determinism and freedom in order to avoid both unjust punishment and becoming unjust. This, of course, assumes that we have a choice. If we do not, we will just do what we do and giving advice is pointless. As I jokingly tell my students, a determinist giving advice about what we should do is like someone yelling advice to a person falling to certain death—he can yell all he wants about what to do, but it won’t matter.

 


Checking “Check Your Privilege”

Privilege (album) (Photo credit: Wikipedia)

As a philosopher, I became familiar with the notion of the modern political concept of privilege as a graduate student—sometimes in classes, but sometimes in being lectured by other students about the matter. Lest anyone think I was engaged in flaunting my privileges, the lectures were always about my general maleness and my general appearance of whiteness (I am actually only mostly white) as opposed to any specific misdeed I had committed as a white-appearing male. I was generally sympathetic to most criticisms of privilege, but I was not particularly happy when people endeavored to use a person’s membership in a privileged class as grounds for rejecting the person’s claims out of hand. Back then, there was no handy phrase to check a member of a privileged class. Fortunately (or unfortunately) such a phrase has emerged, namely “check your privilege!”

The original intent of the phrase is, apparently, to remind a person making a claim on a political (or moral) issue that he is speaking from a position of privilege, such as being a male or straight. While it is most commonly used against members of what can be regarded as the “traditional” privileged classes (males, whites, the wealthy, etc.) it can also be employed against people of classes that are either privileged relative to the classes they are commenting on or in a different non-privileged class. For example, a Latina might be told to “check her privilege” for making a remark about black women. In this case, the idea is to remind the transgressors that different oppressed groups experience their oppression differently.

As might be imagined, many people take issue with being told to “check their privilege!” In some cases, this can be mere annoyance with the phrase. This annoyance can have some foundation, given that the phrase can have a hostile connotation and the fact that it can seem like a dismissive reply.

In other cases, the use of the phrase can be taken as an attempt to silence someone. Roughly put, “check your privilege” can be interpreted as “stop talking” or even as “you are wrong because you belong to a privileged class.” In some cases, people are interpreting the use incorrectly—but in other cases they are interpreting quite correctly.

Thus, the phrase can be seen as having two main functions (in addition to its dramatic and rhetorical use). One is as a reminder, the other is as an attack. I will consider each of these in the context of critical thinking.

The reminder function of the phrase does have legitimacy in that it is grounded in a real need to remind people of two common cognitive biases, namely in-group bias and attribution error. In-group bias is the name for the tendency people have to easily form negative opinions of people who are not in their group (in this case, an allegedly privileged class). This bias leads people to regard members of their own group more positively (attributing positive qualities and assessments to their group members) while regarding members of other groups more negatively (attributing negative qualities and assessments to these others). For example, a rich person might regard other rich people as being hardworking while regarding poor people as lazy, thieving and inclined to use drugs. As another example, a woman might regard her fellow women as kind and altruistic while regarding men as violent, sex-crazed and selfish.

Given the power of this bias, it is certainly worth reminding people of it—especially when their remarks show signs that this bias is likely to be in effect. Of course, telling someone to “check their privilege” might not be the nicest way to engage in the discussion and it is less specific than “consider that you might be influenced by in-group bias.”

Attribution error is a bias that leads people to fail to appreciate that other people are as constrained by events and circumstances as they themselves would be in the same situation. For example, consider a discussion about requiring voters to have a photo ID, reducing the number of polling stations and reducing their hours. A person who is somewhat well off might express the view that getting an ID and driving across town to a polling station on his lunch break is no problem—because it is no problem for him. However, for someone who does not have a car and is very poor, these can be serious obstacles. As another example, someone who is rich might express the view that the poor should not be helped because their poverty is obviously the result of laziness (and not of the circumstances they face, such as being born into poverty).

Given the power of this bias, a person who seems to be making this error should certainly be reminded of the possibility. Of course, telling the person to “check their privilege” might not be the most diplomatic way to engage, and it is certainly less specific than pointing out the likely error. But, given the limits of Twitter, it might be a viable option in that social media context.

In regards to the second main use, using the phrase to silence a person or to reject the person’s claim would not be justified. While it is legitimate to consider the effects of biases, to reject a person’s claim because of their membership in a specific class would be an ad hominem of some sort. An ad hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack is made against the character of the person making the claim, her circumstances, or her actions (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

1. Person A makes claim X.

2. Person B makes an attack on person A.

3. Therefore A’s claim is false.

The reason why an ad hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

Because “check your privilege” is used in this role, I’d suggest a minor addition to the ad hominem family, the check your privilege ad hominem:

1. Person A makes claim X.

2. Person B tells A to “check their privilege” based on A’s membership in group G.

3. Therefore A’s claim is false.

This is, obviously enough, bad reasoning.
