How You Should Vote


As I write this in early October, Election Day in the United States is about a month away. While most Americans do not vote, there is still the question of how a voter should vote.

While I do have definite opinions about the candidates and issues on the current ballot in my part of Florida, this essay is not aimed at convincing you to vote as I did (via my mail-in ballot). Rather, my goal is to discuss how you should vote in general.

The answer to the question of how you should vote is easy: if you are rational, then you should vote in your self-interest. In the case of a specific candidate, you should vote for the candidate you believe will act in your self-interest. In the case of such things as ballot measures, you should vote for or against based on how you believe it will impact your self-interest. So, roughly put, you should vote for what is best for you.

While this is rather obvious advice, it does bring up two often overlooked concerns. The first is the matter of determining what is actually in your self-interest. The second is determining whether a given voting decision will actually serve that self-interest. In the case of a candidate, the concern is whether or not the candidate will act in your self-interest. In the case of things like ballot measures, the question is whether or not the measure will be advantageous to your interests.

It might be thought that a person just knows what is in her self-interest. Unfortunately, people can be wrong about this. In most cases people just assume that if they want or like something, then it is in their self-interest. But what a person likes or wants need not be what is best for her. For example, a person might like the idea of cutting school funding without considering how it will impact her family. In contrast, what people do not want or dislike is assumed to be against their self-interest. Obviously, what a person dislikes or does not want might not be bad for her. For example, a person might dislike the idea of an increased minimum wage and vote against it without considering whether it would actually be in her self-interest. The take-away is that a person needs to look beyond what she likes or dislikes, wants or does not want, in order to determine her actual self-interest.

It is natural to think of what is in a person’s self-interest in rather selfish terms. That is, in terms of what seems to benefit just the person without considering the interests of others. While this is one way to look at self-interest, it is worth considering that what might seem to be in the person’s selfish interest could actually be against her self-interest. For example, a business owner might see paying taxes to fund public education as being against her self-interest because it seems to have no direct, selfish benefit to her. However, having educated fellow citizens would seem to be in her self-interest and even in her selfish interest. For example, having the state pay for the education of her workers is advantageous to her—even if she has to contribute a little. As another example, a person might see paying taxes for public health programs and medical aid to foreign countries as against her self-interest because she has her own medical coverage and does not travel to those countries. However, as has been shown with Ebola, public and even world health is in her interest—unless she lives in total isolation. As such, even the selfish should consider whether or not their selfishness in a matter is actually in their self-interest.

It is also worth considering a view of self-interest that is more altruistic. That is, that a person’s interest is not just in her individual advantages but also in the general good. For this sort of person, providing for the common defense and securing the general welfare would be in her self-interest because her self-interest goes beyond just her self.

So, a person should sort out her self-interest and consider that it might not just be a matter of what she likes, wants or sees as in her selfish advantage. The next step is to determine which candidate is most likely to act in her self-interest and which vote on a ballot measure is most likely to serve her self-interest.

Political candidates, obviously enough, try very hard to convince their target voters that they will act in their interest. Those backing ballot measures also do their best to convince voters that voting a certain way is in their self-interest.

However, the evidence is that politicians do not act in the interest of the majority of those who voted for them. Researchers at Princeton and Northwestern conducted a study, “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens”, to determine whether or not politicians acted based on the preferences of the majority. The researchers examined about 1,800 policies and matched them against the preferences expressed by three classes: the average American (the 50th income percentile), the affluent American (the 90th income percentile) and large special interest groups.

The results are hardly surprising: “The central point that emerges from our research is that economic elites and organized groups representing business interests have substantial independent impacts on US government policy, while mass-based interest groups and average citizens have little or no independent influence.” This suggests that voters are rather poor at selecting candidates who will act in their interest (or perhaps that there are no candidates who will do so).

It can be countered that the study just shows that politicians generally act contrary to the preferences of the majority but not that they act contrary to their self-interest. After all, I made the point that what people want (prefer) might not be what is in their self-interest. But, on the face of it, unless what is in the interest of the majority is that the affluent get their way, then it seems that the politicians voters choose generally do not act in the best interest of the voters. This would indicate that voters should pick different candidates.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

A Philosopher’s Blog: 2012-2013

My latest book, A Philosopher’s Blog 2012-2013, will be free on Amazon from October 8, 2014 to October 12, 2014.

Description: “This book contains select essays from the 2012-2013 postings of A Philosopher’s Blog. The topics covered range from economic justice to defending the humanities, plus some side trips into pain pills and the will.”


Lawful Good

Paladin II (Photo credit: Wikipedia)

As I have written in other posts on alignments, it is often useful to look at the actual world in terms of the D&D alignment system. In this essay, I will look at the alignment that many players find the most annoying: lawful good (or, as some call it, “awful good”).

Pathfinder, which is a version of the D20 D&D system, presents the alignment as follows:

 A lawful good character believes in honor. A code or faith that she has unshakable belief in likely guides her. She would rather die than betray that faith, and the most extreme followers of this alignment are willing (sometimes even happy) to become martyrs.

A lawful good character at the extreme end of the lawful-chaotic spectrum can seem pitiless. She may become obsessive about delivering justice, thinking nothing of dedicating herself to chasing a wicked dragon across the world or pursuing a devil into Hell. She can come across as a taskmaster, bent upon her aims without swerving, and may see others who are less committed as weak. Though she may seem austere, even harsh, she is always consistent, working from her doctrine or faith. Hers is a world of order, and she obeys superiors and finds it almost impossible to believe there’s any bad in them. She may be more easily duped by such impostors, but in the end she will see justice is done—by her own hand if necessary.

In the fantasy worlds of role-playing games, the exemplar of the lawful good alignment is the paladin. Played properly, a paladin character is a paragon of virtue, a sword of righteousness, a defender of the innocent and a pain in the party’s collective ass. This is because the paladin and, to a somewhat lesser extent, all lawful good characters are very strict about being good. They are usually quite willing to impose their goodness on the party, even when doing so means that the party must take more risks, do things the hard way, or give up some gain. For example, lawful good characters always insist on destroying unholy magical items, even when they could be cashed in for stacks of gold.

In terms of actual world moral theories, lawful good tends to closely match virtue theory: the objective is to be a paragon of virtue and all that entails. In actual game play, players tend to (knowingly or unknowingly) embrace the sort of deontology (rules based ethics) made famous by our good dead friend Immanuel Kant. On this sort of view, morality is about duty and obligations, the innate worth of people, and the need to take action because it is right (rather than expedient or prudent). Like Kant, lawful good types tend to be absolutists—there is one and only one correct solution to any moral problem and there are no exceptions. The lawful good types also tend to reject consequentialism—while the consequences of actions are not ignored (except by the most fanatical of the lawful good), what ultimately matters is whether the act is good in and of itself or not.

In the actual world, a significant number of people purport to be lawful good—that is, they claim to be devoted to honor, goodness, and order. Politicians, not surprisingly, often try to cast themselves, their causes and their countries in these terms. As might be suspected, most of those who purport to be good are endeavoring to deceive others or themselves—they mistake their prejudices for goodness and their love of power for a devotion to a just order. While those skilled at deceiving others are dangerous, those who have convinced themselves of their own goodness can be far more dangerous: they are willing to destroy all who oppose them for they believe that those people must be evil.

Fortunately, there are actually some lawful good types in the world. These are the people who sincerely work for just, fair and honorable systems of order, be they nations, legal systems, faiths or organizations. While they can seem a bit fanatical at times, they do not cross over into the evil that serves as a key component of true fanaticism.

Neutral good types tend to see the lawful good types as being too worried about order and obedience. The chaotic good types respect the goodness of the lawful good types, but find their obsession with hierarchy, order and rules oppressive. However, good creatures never willingly and knowingly seriously harm other good creatures. So, while a chaotic good person might be critical of a lawful good organization, she would not try to destroy it.

Chaotic evil types are the antithesis of the lawful good types and they are devoted enemies. The chaotic evil folks hate the order and goodness of the lawful good, although they certainly delight in destroying them.

Neutral evil types are opposed to the goodness of the lawful good, but can be adept at exploiting both the lawful and good aspects of the lawful good. Of course, the selfishly evil need to avoid exposure, since the good will not willingly suffer their presence.

Lawful evil types can often get along with the lawful good types in regards to the cause of order. Both types respect tradition, authority and order—although they do so for very different reasons. Lawful evil types often have compunctions that can make them seem to have some goodness and the lawful good are sometimes willing to see such compunctions as signs of the possibility of redemption. In general, the lawful good and lawful evil are most likely to be willing to work together at the societal level. For example, they might form an alliance against a chaotic evil threat to their nation. Inevitably, though, the lawful good and lawful evil must end up in conflict. Which is as it should be.

 


Robert Pollack’s The Faith of Biology & The Biology of Faith

Some time ago now, I was sent a review copy of The Faith of Biology & The Biology of Faith: Order, Meaning, and Free Will in Modern Medical Science, by Robert Pollack (Columbia University Press, 2013 – first published 2000). Pollack is a professor of biological sciences at Columbia University, and the main content of the book consists of three public lectures that he gave at Columbia in 1999 as that year’s Schoff Memorial Lectures. His main subject matter for the lectures was the relationship, as he saw it, between science (especially medical research and practice) and religious faith.

As one might hope, given that background, the book is thoughtful, well-written, and accessible. It provides an interesting case study of an eminent scientist’s attempt to reconcile his scientific understanding with his religious faith. It is not, however, philosophically sophisticated, and I doubt that it will persuade anybody who is not already committed to the idea that science and religion are in some way compatible. Indeed, I expect that I could deliver a better argument for their compatibility if called upon to do so.

Refreshingly, Pollack does not fall back on contrived theological arguments. Although his book contains a certain amount of theology, he bases his continued religious belief squarely on the emotional unacceptability of the alternative – all while conceding that science not only provides no good evidence for the existence of a divine intelligence, but actually provides evidence to suggest the opposite.

Strictly speaking, he may be correct that what he calls “matters of personal belief” (he means supernatural or otherworldly beliefs) “cannot finally be tested by science”; there are notoriously many ingenious moves available to protect “personal belief” from empirical refutation. The emphasis should, however, be on the word finally. Someone who is not emotionally committed to a religious worldview may see a great deal in the history and findings of science that at least makes religion (and particular religions) far less psychologically and intellectually attractive than would otherwise be so.

Pollack does not really deny this. On the contrary, he concedes that “[t]he molecular biology of evolution, in particular, has uncovered facts about me and the rest of us… that fit badly, if at all, into my religion’s [i.e. Judaism’s] revelation of meaning.” After some discussion of the detail, he concludes: “These facts from science tell us, in other words, that our species – with all our appreciation of ourselves as unique individuals – is not the creation of design but the result of accumulated errors.”

If he’s right that this is the implication of our scientific knowledge, why not accept it and build our self-understanding from there? Nothing in the evolutionary account contradicts other facts about the world, such as our responsiveness to each other, our status as social animals, our ability to communicate through language and other means, and our capacity to produce art and culture, and to create societies and civilisations. Even if we are “the result of accumulated errors”, I see no reason to deny the possibility of a rich humanistic understanding of ourselves and each other: one that need include nothing that fits badly with robust findings from the physical and biological sciences.

For Pollack, nothing like this would be good enough. For him, the idea that we live in a world without transcendent meaning is emotionally unbearable, so he relies on what he calls “the irrational certainty that there must be meaning and purpose to one’s life despite these data.” He is talking, in this passage, about meanings and purposes that transcend the natural world, including the world of socially constructed institutions.

Having come so far, Pollack then has much to say about how the emotional certainties offered by religious faith might shape biomedical research and medical practice. Some of his recommendations may be defensible on other grounds, while some may not be (for example, he adopts what strikes me as an unnecessarily negative attitude to reproductive cloning and other technologies of genetic choice). Most fundamentally, however, he offers nothing to suggest that religious faith does, after all, fit well with scientific knowledge. The irrational certainty that there must be transcendent meaning, emanating from an “unknowable” divine source, should cut no ice for anyone who approaches the question rationally. The fact, if it is one, that science cannot disprove the existence of such a source in a final, knock-down, logically demonstrative way is scarcely more impressive.

Pollack suggests that religious faith should inform scientific practice, even as scientific understandings inform religious doctrine. But there is nothing in The Faith of Biology & The Biology of Faith to make religious faith attractive to a rational, reasonable, scientifically informed person who currently lacks it. There is not even anything to stand against the claim that scientific information will tend to make religious faith less intellectually attractive to such a person.

The book may give permission to people with similar emotional responses to Pollack’s to continue their religious practice in the face of scientific evidence. It may offer them something of a template for thinking about science in the light of irrational, emotionally driven, faith. Perhaps The Faith of Biology & The Biology of Faith is a success in those terms. But its arguments tend to suggest that scientific findings are more a stumbling block than otherwise to a life of faith. Pollack continues to maintain religious beliefs more despite what he knows from science than because of it.

That’s okay, as long as he does not expect others to follow policy recommendations based on his faith position. Meanwhile, The Faith of Biology & The Biology of Faith does little, if anything, to support the accommodationist position that religion and science are fully compatible. A position that it is possible for someone sufficiently emotionally driven to maintain faith despite the scientific evidence is hardly one of full compatibility between religion and science.

Again, that’s okay – Pollack does not really argue otherwise. Still, his book can easily be read against its grain as an example of the contortions needed to maintain serious religious faith while also being well-informed about science. In that respect, it should give religion/science accommodationists pause.

[Psst... My Amazon author page.]

Gaming Newcomb’s Paradox III: What You Actually Decide

Robert Nozick (Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

In this essay I will present the game that creates the paradox and then discuss a specific aspect of Nozick’s version, namely his stipulation regarding the effect of how the player of the game actually decides.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrongly).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to Nozick’s condition that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

This stipulation provides some insight into how the Predictor’s prediction ability is supposed to work. This is important because the workings of the Predictor’s ability to predict are, as I argued in my previous essay, rather significant in sorting out how one should decide.

The stipulation mainly serves to indicate how the Predictor’s ability does not work. First, it would seem to indicate that the Predictor does not rely on time travel—that is, it does not go forward in time to observe the decision and then travel back to place (or not place) the money in the box. After all, the prediction in this case would be explained in terms of what the player decided to do. This still leaves it open for the Predictor to visit (or observe) a possible future (or, more accurately, a possible world that is running ahead of the actual world in its time) since the possible future does not reveal what the player actually decides, just what she decides in that possible future. Second, this would seem to indicate that the Predictor is not able to “see” the actual future (perhaps by being able to perceive all of time “at once” rather than linearly as humans do). After all, in this case it would be predicting based on what the player actually decided. Third, this would also rule out any form of backwards causation in which the actual choice was the cause of the prediction. While there are, perhaps, other specific possibilities that are also eliminated, the gist is that the Predictor has to, by Nozick’s stipulation, be limited to information available at the time of the prediction and not information from the future. There are a multitude of possibilities here.

One possibility is that the Predictor is telepathic and can predict based on what it reads regarding the player’s intentions at the time of the prediction. In this case, the best approach would be for the player to think that she will take one box, and then after the prediction is made, take both. Or, alternatively, use some sort of drugs or technology to “trick” the Predictor. The success of this strategy would depend on how well the player can fool the Predictor. If the Predictor cannot be fooled or is unlikely to be fooled then the smart strategy would be to intend to take box B and then just take box B. After all, if the Predictor cannot be fooled, then box B will be empty if the player intends on taking both.

Another possibility is that the Predictor is a researcher—it gathers as much information as it can about the player and makes a shrewd guess based on that information (which might include what the player has written about the paradox). Since Nozick stipulates that the Predictor is “almost certainly” right, the Predictor would need to be an amazing researcher. In this case, the player’s only way to mislead the Predictor is to determine its research methods and try to “game” it so the Predictor will predict that she will just take B, then actually decide to take both. But, once again, the Predictor is stipulated to be “almost certainly” right—so it would seem that the player should just take B. If B is empty, then the Predictor got it wrong, which would “almost certainly” not happen. Of course, it could be contended that since the player does not know how the Predictor will predict based on its research (the player might not know what she will do), then the player should take both. This, of course, assumes that the Predictor has a reasonable chance of being wrong—contrary to the stipulation.

A third possibility is that the Predictor predicts in virtue of its understanding of what it takes to be a determinist system. Alternatively, the system might be a random system, but one that has probabilities. In either case, the Predictor uses the data available to it at the time and then “does the math” to predict what the player will decide.

If the world really is deterministic, then the Predictor could be wrong if it is determined to make an error in its “math.” So, the player would need to predict how likely this is and then act accordingly. But, of course, the player will simply act as she is determined to act. If the world is probabilistic, then the player would need to estimate the probability that the Predictor will get it right. But, it is stipulated that the Predictor is “almost certainly” right so any strategy used by the player to get one over on the Predictor will “almost certainly” fail, so the player should take box B. Of course, the player will do what “the dice say” and the choice is not a “true” choice.

If the world is one with some sort of metaphysical free will that is in principle unpredictable, then the player’s actual choice would, in principle, be unpredictable. But, of course, this directly violates the stipulation that the Predictor is “almost certainly” right. If the player’s choice is truly unpredictable, then the Predictor might make a shrewd/educated guess, but it would not be “almost certainly” right. In that case, the player could make a rational case for taking both—based on the estimate of how likely it is that the Predictor got it wrong. But this would be a different game, one in which the Predictor is not “almost certainly” right.
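The payoff arithmetic behind these scenarios is easy to check. The sketch below is my own illustration (the function name and parameters are not from Nozick's paper); it computes the expected payout of each strategy for a Predictor that is right with probability p.

```python
# Hypothetical helper (my own illustration, not from Nozick's paper):
# expected payout of each strategy when the Predictor is right with
# probability p.
def expected_value(strategy, p, small=1_000, large=1_000_000):
    if strategy == "one_box":
        # Correct prediction: box B holds $1,000,000; wrong: B is empty.
        return p * large
    # "two_box": a correct prediction leaves B empty; a wrong one fills it.
    return p * small + (1 - p) * (small + large)

# With an "almost certainly" right Predictor, one-boxing dominates:
print(expected_value("one_box", 0.99))  # roughly 990,000
print(expected_value("two_box", 0.99))  # roughly 11,000
```

Two-boxing only pulls ahead when the Predictor's accuracy drops to about 50 percent (the break-even point is p = 1,001,000/2,000,000, or roughly 0.5005), which is exactly the "different game" described above.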

This discussion seems to nicely show that the stipulation that “what you actually decide to do is not part of the explanation of why he made the prediction he made” is a red herring. Given the stipulation that the Predictor is “almost certainly” right, it does not really matter how its predictions are explained. The stipulation that what the player actually decides is not part of the explanation simply serves to mislead by creating the false impression that there is a way to “beat” the Predictor by actually deciding to take both boxes and gambling that it has predicted the player will just take B.  As such, the paradox seems to be dissolved—it is the result of some people being misled by one stipulation and not realizing that the stipulation that the Predictor is “almost certainly” right makes the other irrelevant.

 


Gaming Newcomb’s Paradox II: Game Mechanics

(Photo credit: Wikipedia)

Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

As a philosopher, a game master (a person who runs a tabletop role playing game) and an author of game adventures, I am rather fond of puzzles and paradoxes. As a philosopher, I can (like other philosophers) engage in the practice known as “just making stuff up.” As an adventure author, I can do the same—but I need to present the actual mechanics of each problem, puzzle and paradox. For example, a trap description has to specify exactly how the trap works, how it may be overcome and what happens if it is set off. I thought it would be interesting to look at Newcomb’s Paradox from a game master perspective and lay out the possible mechanics for it. But first, I will present the paradox and two stock attempts to solve it.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrongly).

The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to laying out some possible mechanics in gamer terms.

One obvious advantage of crafting the mechanics for a game is that the author and the game master know exactly how the mechanic works. That is, they know the truth of the matter. While the players in role-playing games know the basic rules, they often do not know the full mechanics of a specific challenge, trap or puzzle. Instead, they need to figure out how it works—which often involves falling into spiked pits or being ground up into wizard burger. Fortunately, Newcomb’s Paradox has very simple game mechanics, but many variants.

In game mechanics, the infallible Predictor is easy to model. The game master’s description would be as follows: “have the player character (PC) playing the Predictor’s game make her choice. The Predictor is infallible, so if the player takes box B, she gets the million. If the player takes both, she gets $1,000.” In this case, the right decision is to take B. After all, the Predictor is infallible. So, the solution is easy.

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000
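As a sketch of my own (not something from Nozick's paper), the chart above can be encoded directly as a payout function; the box values are the standard ones from the puzzle:

```python
def payout(predicted: str, actual: str) -> int:
    """Return the player's payout, given the Predictor's prediction
    and the player's actual choice ("both" or "B")."""
    box_a = 1_000
    # Box B is stuffed only if the Predictor predicted "B only".
    box_b = 1_000_000 if predicted == "B" else 0
    if actual == "both":
        return box_a + box_b
    return box_b  # the player took only box B

# Reproduces the four rows of the chart:
# payout("both", "both") -> 1,000
# payout("both", "B")    -> 0
# payout("B", "both")    -> 1,001,000
# payout("B", "B")       -> 1,000,000
```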

A less-than-infallible Predictor is also easy to model with dice. The description of the Predictor simply specifies the accuracy of its predictions. So, for example: “The Predictor is correct 99% of the time. After the player character makes her choice, roll D100 (generating a number from 1-100). If you roll 100, the Predictor was wrong. If the PC picked just box B, it is empty and she gets nothing because the Predictor predicted she would take both. If she picked both, B is full and she gets $1,001,000 because the Predictor predicted she would take just B. If you roll 1-99, the Predictor was right. If the PC picked box B, she gets $1,000,000. If she takes both, she gets $1,000 since box B is empty.” In this case, the decision is a gambling matter and the right choice can be calculated by considering the chance the Predictor is right and the relative payoffs. Assuming the Predictor is “almost always right” makes choosing only B the rational choice (unless the player absolutely and desperately needs only $1,000), since the player who picks just B will “almost always” get the $1,000,000 rather than nothing, while the player who picks both will “almost always” get just $1,000. But, if the Predictor is “almost always wrong” (or even just usually wrong), then taking both would be the better choice. And so on for all the fine nuances of probability. The solution is relatively easy—it just requires doing some math based on the chance the Predictor is correct in its predictions. As such, if the mechanism of the Predictor is specified, there is no paradox and no problem at all. But, of course, in a role-playing game puzzle, the players should not know the mechanism.
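The "doing some math" above is just an expected-value computation. This sketch (my own arithmetic, added for illustration) shows why a 99%-accurate Predictor makes taking only box B the better bet, and where the break-even accuracy sits:

```python
def expected_value(choice: str, p: float) -> float:
    """Expected payout of a choice, where p is the probability
    that the Predictor's prediction is correct."""
    if choice == "B":
        # Correct prediction: box B holds $1,000,000; wrong: it is empty.
        return p * 1_000_000 + (1 - p) * 0
    # choice == "both":
    # correct prediction: B is empty, so the player gets only $1,000;
    # wrong prediction: B is full, so the player gets $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# At 99% accuracy, taking just B dominates on average:
#   expected_value("B", 0.99)    -> 990,000
#   expected_value("both", 0.99) -> 11,000
# Setting the two expressions equal gives the break-even accuracy:
#   1,000,000p = 1,001,000 - 1,000,000p  =>  p = 0.5005.
# Above it, take B; below it, take both.
```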

If the game master is doing her job, when the players are confronted by the Predictor, they will not know the Predictor’s predictive powers (and clever players will suspect some sort of trick or trap). The game master will say something like “after explaining the rules, the strange being says ‘my predictions are nearly always right/always right’ and sets two boxes down in front of you.” Really clever players will, of course, make use of spells, items, psionics or technology (depending on the game) to try to determine what is in the box and the capabilities of the Predictor. Most players will also consider just attacking the Predictor and seeing what sort of loot it has. So, for the game to be played in accord with the original version, the game master will need to provide plausible ways to counter all these efforts so that the players have no idea about the abilities of the Predictor or what is in box B. In some ways, this sort of choice would be similar to Pascal’s famous Wager: one knows that the Predictor will get it right or it won’t. But, in this case, the player has no idea about the odds of the Predictor being right. From the perspective of the player who is acting in ignorance, taking both boxes yields a 100% chance of getting $1,000 and somewhere between a 0% and 100% chance of getting the extra $1,000,000. Taking box B alone yields a 100% chance of not getting the $1,000 and some chance between 0% and 100% of getting $1,000,000. When acting in ignorance, the safe bet is to take both: the player walks away with at least $1,000. Taking just B is a gamble that might or might not pay off: the player might walk away with nothing or with $1,000,000.

But which choice is rational can depend on many possible factors. For example, if the players need $1,000 to buy a weapon required to defeat the big boss monster in the dungeon, then picking the safe choice (taking both boxes) is the smart choice: they can get the weapon for sure. If they need $1,001,000 to buy the weapon, then picking both is also the smart choice, since that is the only way to get that sum in this game. If they need $1,000,000 to buy the weapon, then there is no rational way to pick between taking one box or both, since they have no idea which option gives them the best chance of getting at least $1,000,000. Picking both guarantees them $1,000 but only gets them the $1,000,000 if the Predictor predicted wrong, and they have no idea whether it got it wrong. Picking just B only gets them the $1,000,000 if the Predictor predicted correctly, and they have no idea whether it got it right.
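The worst-case/best-case reasoning in the paragraph above can be captured in a small helper (the function name and framing are mine, purely illustrative): given the sum the party needs, it reports which choices guarantee that sum and which can merely reach it when the Predictor's accuracy is unknown.

```python
def options_for(needed: int) -> dict:
    """For a needed sum, report which choices guarantee it
    (worst case) and which can at best reach it (best case),
    when the Predictor's accuracy is completely unknown."""
    # Worst case: "both" still nets box A; "B only" may net nothing.
    worst = {"both": 1_000, "B": 0}
    # Best case: "both" nets both boxes; "B only" nets a full box B.
    best = {"both": 1_001_000, "B": 1_000_000}
    return {
        "guaranteed_by": [c for c in worst if worst[c] >= needed],
        "possible_via": [c for c in best if best[c] >= needed],
    }

# Needing $1,000: only "both" guarantees it.
# Needing $1,000,000: neither choice guarantees it; either might reach it.
# Needing $1,001,000: only "both" can possibly reach it.
```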

In the actual world, a person playing the game with the Predictor would be in the position of the players in the role-playing game: she does not know how likely it is that the Predictor will get it right. If she believes that the Predictor will probably get it wrong, then she would take both. If she thinks it will get it right, she would take just B. Since she cannot pick randomly (in Nozick’s scenario, B is empty if the player decides by chance), that option is not available. As such, Newcomb’s Paradox is an epistemic problem: the player does not know the accuracy of the predictions, but if she did, she would know how to pick. But, if it is known (or just assumed) that the Predictor is infallible or almost always right, then taking B is the smart choice (in general, unless the person absolutely must have $1,000). To the degree that the Predictor can be wrong, taking both becomes the smarter choice (if the Predictor is always wrong, taking both is the best choice). So, there seems to be no paradox here. Unless I have it wrong, which I certainly do.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Transhumanism and The Journal of Evolution and Technology

This piece was first published over on the IEET site (and I’ve also just reblogged it at my personal blog, The Hellfire Club). It sets out briefly what The Journal of Evolution and Technology is all about, for those who might be interested.

I’ve had the honor of serving as editor-in-chief of The Journal of Evolution and Technology (henceforth “JET”) since January 2008 – so it’s now approaching seven years! Where did the time go? Having been invited by Kris Notaro to write something about an aspect of transhumanism as it involves me professionally, I’m taking the opportunity to reflect briefly on JET and its mission. We have a great story to tell, and perhaps we should tell it more often.

JET was founded in 1998 as The Journal of Transhumanism, and was originally published by the World Transhumanist Association. In November 2004, it moved under the umbrella of the Institute for Ethics and Emerging Technologies, which, of course, seeks to contribute to understanding of the impact of emerging technologies on individuals and societies. This year, then, we are celebrating the first decade of the journal’s publication by the IEET.

My four predecessors in the editorial chair – Nick Bostrom, Robin Hanson, Mark Walker, and James Hughes – have each contributed in distinguished ways to the transhumanist movement and transhumanist thought. They developed the journal as a leading forum for discussion of the future of the human species and whatever might come after it.

JET is now one of the IEET’s flagship operations. It maintains standards of scholarship, originality, intellectual rigor, and peer review similar to those of well-established academic journals. It differs in its willingness to publish material that comparable journals might consider too radical or speculative. The editors welcome high-quality submissions on a wide range of relevant topics and from almost any academic discipline or interdisciplinary standpoint.

We have a publication schedule that assigns one volume to each year, with an irregular number of issues per volume. Each issue contains a mix of articles, reviews, and sometimes other forms such as symposia and peer commentaries. We publish both regular issues – based on submissions received from time to time – and special issues. The latter may, for example, take the form of edited conference proceedings, or they may result from calls for papers on a designated topic.

Recent special issues have covered such topics as Nietzsche and European posthumanisms, machine intelligence, and basic income guarantee schemes in the context of technological change.

Generally speaking, we publish individual articles as they are received, peer-reviewed, and edited, which allows a quick turnaround from acceptance to publication. With our relatively modest resources and the challenges inherent in a journal with such a wide interdisciplinary agenda, we are sometimes slower than we’d wish in making initial decisions to accept or reject, but we strive to overcome those problems and we give careful attention at all stages to each submission that we receive. We work closely with authors to get published articles into the best possible form to communicate to a highly educated but diverse audience, and we’ve often received grateful thanks for our editorial input. In short, we have much to offer potential contributors who are producing research at the leading edge of transhumanist thought. If that sounds like you, please think of submitting to JET.

Central to our thinking, and implicit in the title “evolution and technology,” is the idea – increasingly familiar and plausible – that the human species stands at the threshold of a new form of evolution. This is very different from the slow Darwinian mechanisms of differential survival and reproduction. It is powered, rather, by new technologies that increasingly work their way inward, transforming human bodies and minds. According to this idea, technology can do more than merely give us tools to manipulate the world around us. It can actually alter us, and not just by shaping our neurological pathways when we learn to handle new tools. Our future may, in part, be the product of emerging technologies of human transformation, ranging from genetic engineering to pharmaceutical cognitive enhancement to such radical possibilities as mind uploading and all that it might imply.

This idea of a technologically mediated process of evolution is, of course, familiar to transhumanists, who envisage (and generally welcome) the emergence of intelligences with greater-than-human physical and cognitive capacities. Even outside the transhumanist movement, however, there’s an increasing familiarity with the general idea of a new kind of evolution, no longer the product of Darwinian mechanisms but driven by technology and deliberate choices.

At the same time, this idea, in all its forms, remains controversial. Even if we grant it our broad acceptance, there remains much to debate. It is unclear just how the process might be manifested in the years to come, just where it might take us or our successors, and what downsides there might be. No serious person should doubt that there will be risks, possibly on a global scale, in any path of transition from human to posthuman intelligence.

The idea of technologically mediated evolution, perhaps with a great transition from human to posthuman, merits careful study from all available viewpoints. Among writers and thinkers who take the idea seriously, there are bound to be disagreements. To what extent is the process already happening? If it accelerates or continues over a vast span of time, will this be a good thing or a bad thing – or is it a phenomenon that resists moral evaluation? What visions of the human or posthuman future are really plausible: for example, does the idea of mind uploading make good sense when subjected to scientific and logical scrutiny? Reasonable answers to such questions range from radical transhumanist visions of sweeping, rapid, entirely desirable change to various kinds of skepticism, caution, or concern.

JET welcomes a spectrum of views on all this, and we have been willing to publish intellectually serious critiques of transhumanist views alongside radical manifestos by transhumanists. We are unusual, though, in providing a forum for radical proponents of new technology to develop their visions in detail, and with a rigor seldom found elsewhere. Their ideas are then available in their strongest form for scrutiny from admirers and critics alike.

As I said at the start, there’s a great story to tell about JET – the journal has a rich history and exciting prospects for the future. If you’re not familiar with what we do, please check us out!


Gaming Newcomb’s Paradox I: Problem Solved

Billy Jack (Photo credit: Wikipedia)

One of the many annoying decision theory puzzles is Newcomb’s Paradox. The paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.

The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain-scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.

The following standard chart shows the possible results:

 

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000

 

This is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).

The second stock solution is that the best choice is to take B. Given the assumption that the Predicator is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000.  If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B.

Gamers of the sort who play Pathfinder, D&D and other such role-playing games know how to properly solve this paradox. The Predictor has at least $1,001,000 on hand (probably more, since it will apparently play the game with anyone) and is worth experience points (everything is worth XP). The description just specifies its predictive abilities for the game and no combat abilities are mentioned. So, the solution is to beat down the Predictor, loot it and divide up the money and experience points. It is kind of a jerk when it comes to this game, so there is not really much of a moral concern here.

It might be claimed that the Predictor could not be defeated because of its predictive powers. However, knowing what someone is going to do and being able to do something about it are two very different matters. This is nicely illustrated by the film Billy Jack:

 

[Billy Jack is surrounded by Posner's thugs]

Mr. Posner: You really think those Green Beret Karate tricks are gonna help you against all these boys?

Billy Jack: Well, it doesn’t look to me like I really have any choice now, does it?

Mr. Posner: [laughing] That’s right, you don’t.

Billy Jack: You know what I think I’m gonna do then? Just for the hell of it?

Mr. Posner: Tell me.

Billy Jack: I’m gonna take this right foot, and I’m gonna whop you on that side of your face…

[points to Posner's right cheek]

Billy Jack: …and you wanna know something? There’s not a damn thing you’re gonna be able to do about it.

Mr. Posner: Really?

Billy Jack: Really.

[kicks Posner's right cheek, sending him to the ground]

 

So, unless the Predictor also has exceptional combat abilities, the rational solution is the classic “shoot and loot” or “stab and grab.” Problem solved.


Assessment, Metrics & Rankings

Having been in academics for quite some time, I have seen fads come, go and stick. A recent fad is the obsession with assessment. As with many such things, assessment arrived with various acronyms and buzz words. Those more cynical than I would say that all acronyms of administrative origin (AAO) amount to B.S. But I would not say such a thing. I do, however, have some concern with the obsession with assessment.

One obvious point of concern was succinctly put by a fellow philosopher: “you don’t fatten the pig by weighing it.” The criticism behind this homespun remark is that time spent on assessment is time taken away from the core function of education, namely actually educating students. At the K-12 level, the burden of assessment and evaluation has become quite onerous. At the higher education level, the burden is not as great—but considerable time is still spent on such matters.

One reply to this concern is that assessment is valuable and necessary: if the effectiveness (or ineffectiveness) of education is not assessed, then there would be no way of knowing what is working and what is not. The obvious counter to this is that educators did quite well in assessing their efforts before the rise of modern assessment and it has yet to be shown that these efforts have actually improved education.

Another obvious concern is that in addition to the time spent by faculty on assessment, a bureaucracy of assessment has been created. Some schools have entire offices devoted to assessment complete with staff and administrators. While only the hard-hearted would begrudge someone employment in these tough times, the toughness of the times should dictate that funding is spent on core functions rather than assessing core functions.

The reply to this is to argue that the assessment is more valuable than the alternative. That is, that funding an assessment office is more important to serving the core mission of the university than more faculty or lower tuition would be. This is, of course, something that would need to be proven.

Another common concern is that assessment is part of the micromanagement of public education being imposed by state legislatures (often by the very same people who speak loudly about getting government off peoples’ backs and protecting businesses from government regulation). This, some critics contend, is all part of a campaign to intentionally discredit and damage public education so as to allow the expansion of for-profit education.

The reply to this is that the state legislature has the right to insist that schools provide evidence that the (ever-decreasing) public money is being well spent. If the legislatures did show true concern for the quality of education and were devoted to public education, this reply would have merit.

Predating the current assessment fad is a much older concern with rankings. Recently I heard a piece on NPR about how Florida’s Board of Governors (the folks who run public education) is pushing Florida public universities to become top ranked schools. There are quite a few rankings, ranging from US News & World Report’s rankings to those of Kiplinger’s. Each of these has a different metric. For example, Kiplinger’s rankings are based on financial assessment. While it is certainly desirable to be well ranked, it is rather ironic that Florida’s public universities are being pushed to rise in the ranks at the same time that the state legislature and governor have consistently cut funding and proven generally hostile to public education. One unfortunate aspect of the ranking obsession is that Florida has adopted a performance based funding system in which the top schools get extra funding while the lower ranked schools get funding cut. Since the schools are competing with each other, some of the schools will end up lower ranked no matter how well they do—so some schools will get cuts, no matter what. This seems to be an odd approach: insisting on improvement while systematically making it harder and harder to improve.

This is also a problem with assessment. To return to assessment in closing: a standard feature of assessment is that the results of the previous assessment must be applied to improve each academic program. That is, there is an assumption of perpetual improvement. Unfortunately, due to budget cuts, there is typically no money available for faculty salary increases. As such, the result is that faculty are supposed to be better each year, but get paid less (since inflation and cost-of-living increases reduce the value of the salary). The system thus demands perpetual improvement of faculty and schools, but offers no incentives or rewards—other than not getting fired or not being the school that gets the most cuts. Interestingly, the folks imposing this system are the same folks who tend to claim that taxation and government impositions kill the success of business. That is, if businesses have less money and are regulated too much by the state, then it will be bad. Apparently this view does not extend to education. But there might be an ironic hope: education is being “businessified,” and perhaps once the transformation is complete, the universities will get the love showered on corporations.


 

Outsourcing Education for Savings

Due to a variety of factors, such as reduced state support and ever-expanding administrations, the cost of college in the United States has increased dramatically. In Michigan, a few community colleges have addressed this problem in a way similar to that embraced by businesses: they are outsourcing education. As of this writing, six Michigan community colleges have contracted with EDUStaff—a company that handles recruiting and managing adjunct faculty.

It might be wondered how adding a middleman between the college and the employee would save money. The claim is that since EDUStaff rather than the colleges employs the adjuncts, the colleges save because they do not have to contribute to state pensions for these employees. Michigan Central College claims to have saved $250,000 in a single year.

One concern with this approach is that it is being driven by economic values rather than educational values—that is, the goal is to save money rather than to serve educational goals. If the core function of a college is to educate, then that should be the main focus, though practical economic concerns obviously do matter.

A second concern is that this saving mechanism is being applied to faculty and not staff and administrators. If this approach were a good idea when applied to the core personnel of a college, then it would seem to be an even better idea when applied to the administration and staff. The logical end result would, of course, be a completely outsourced college—but this seems rather absurd.

A third concern is that while avoiding pension payments yields short-term savings, the impact on the adjuncts should be considered. This approach will certainly make working for EDUStaff less desirable. There is also the fact that the adjuncts will not be building a retirement, which means they will need to draw more heavily on the state (or keep working past when they should retire). As such, the savings for the college come at the cost of the adjuncts. This, of course, leads to a broader issue, namely whether or not employment should include retirement benefits. I would suspect that those who came up with this plan have very good retirement plans—but are clearly quite willing to deny others that benefit. If they truly wish to save money, they should give up their retirements as well—why should only faculty do without?

 
