Philosophers! I have a proposition to put to you. Nowadays, we would-be rational members of the public, the intellectually-minded, many citizens, are too in love with the concept of evidence.
Perhaps this surprises you. Maybe you’re thinking: if only! If only enough attention were paid to the massive evidence that dangerous climate change is happening, and that it’s human-triggered. Or: if only the epidemiological evidence marshalled by Wilkinson and Pickett — that more inequality makes society worse in almost every conceivable way — were acted upon.
But actually, even in cases like these, I think that my proposition is still true. Take human-triggered climate-change. Yes, the evidence is strong; but a ‘sceptic’ can always ask for more/better evidence, and thus delay action. There is something stronger than evidence: the concept of precaution.
A sceptic, unconvinced by climate-models, ought to be more cautious than the rest of us about bunging unprecedented amounts of potential-pollutants into the atmosphere! For any uncertainty over the evidence increases our exposure to risk, our fragility.
The climate-sceptics exploit any scientific uncertainty to undermine our confidence in the evidence at our disposal. So far as it goes, this move is correct. But the greater the uncertainty in the science, the greater our exposure to risk. Uncertainty undermines evidence, but it doesn’t undermine the need for precaution: it underscores it! For remember how high the stakes are.
Think back to the great precedent for the climate issue: the issue of smoking and cancer. For decades, tobacco companies prevaricated in order to stall action against the epidemic of lung cancer. How? They demanded incontrovertible evidence that smoking caused cancer, and they claimed that until we had such evidence there was nothing to be said against smoking, health-wise. They deliberately evaded the employment of the precautionary principle, which would have warned that, in the absence of such evidence, it was still unsafe to pump your lungs full of smoke and associated chemicals, day in day out, in a manner without natural precedent.
We ought to have relied more on precaution and less on evidence in relation to the smoking-cancer connection. The same goes for climate. (Only: the stakes are much higher, and so the case for precaution is much stronger still.)
And for inequality: Wilkinson and Pickett are merely confirming what we all already ought to have known anyway: that it’s reckless to raise inequality to unprecedented levels, and so to fragilise society itself (for how can one have a society at all, when levels of trust and of commingling are ever-decreasing?).
The same goes for advertising targeted at children: It’s outrageous to demand evidence that dumping potential-toxins into the mental environment actually is dangerous; we just need to exercise precautious care with regard to our children’s fragile, malleable minds.
And for geo-engineering: There’s no evidence at all that geoengineering does any harm, because (thankfully!) it hasn’t been carried out yet: in this case we must be precautious, or risk nemesis, for by the time any evidence was in, it would be too late.
The same goes for GM crops: There is little evidence of harm, to date, from GM, but evidence is the wrong place to look (http://blog.talkingphilosophy.com/?p=8071): one ought to focus on the generation of new uncertainties and of untold exposures to grave risk that is inevitably consequent upon taking genes from fish and putting them into tomatoes, or on creating ‘terminator’ genes, etc. The absence of evidence that GM is harmful must not be confused with evidence of absence of potential harm from GM. We lack the latter, and thus we are direly exposed to the risk of what my philosophical colleague Nassim Taleb (see http://www.fooledbyrandomness.com/pp2.pdf for our joint work in this area) calls a ‘black swan’ event. A massive known or even unknown unknown.
Our love-affair with science, that I’ve criticised previously on this blog (see e.g. http://blog.talkingphilosophy.com/?p=8071), is at the root of this. Science-worship, scientism, is responsible for the extreme privileging of evidence over other things that are often even more important. So: let’s end our irrational, dogmatic love-affair with evidence. Yes, being ‘evidence-based’ is usually (though not always!) better than nothing. But there’s usually, when the stakes are highest, something better still: being precautious. (And what’s more: being precautious makes it easier to win, and quicker.)
To end with, here are a couple of my favourite quotes from Wittgenstein, on topic:
1) Science: enrichment and impoverishment. The one method elbows all others aside. Compared with this they all seem paltry, preliminary stages at best. [Wittgenstein, Culture and Value p.69]
2) “Our craving for generality has [as one key] source … our preoccupation with the method of science. I mean the method of reducing the explanation of natural phenomena to the smallest possible number of primitive natural laws; and, in mathematics, of unifying the treatment of different topics by using a generalization. Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer in the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. I want to say here that it can never be our job to reduce anything to anything, or to explain anything. Philosophy really is “purely descriptive.”” – Wittgenstein, Blue and Brown Books p.23.
I’ll be elaborating on these quotes, and on the case made here, in opening and closing plenaries at a Conference in Oxford this Saturday, in case anyone happens to be in the area… http://www.stx.ox.ac.uk/happ/events/wittgenstein-and-physics-one-day-conference
Meanwhile, thanks for your attention…
One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior.
Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults.
Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they evaluate the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she was willing to take more risk to achieve that reward.
Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regards to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk taking behavior of teens relative to adults makes considerable sense.
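The weighing just described can be put as a small expected-value sketch. This is purely illustrative: every number in it (probabilities, reward values, harm cost) is an invented assumption, not data from the research mentioned above.

```python
# A hedged sketch of risk-reward weighing: the same act can be rational for
# one agent and not another if they value the reward differently.
# All numbers below are invented for illustration.

def expected_value(p_reward, reward_value, p_harm, harm_cost):
    """Chance-weighted reward minus chance-weighted harm."""
    return p_reward * reward_value - p_harm * harm_cost

# A teen who values the reward at 100 and an adult who values it at 40,
# facing the identical 10% chance of a harm costing 500:
teen_ev = expected_value(0.9, 100, 0.1, 500)
adult_ev = expected_value(0.9, 40, 0.1, 500)

print(teen_ev > 0)   # True: the gamble looks worth taking to the teen
print(adult_ev > 0)  # False: the same gamble does not to the adult
```

On this toy model the teen and the adult reason in exactly the same way; only the value assigned to the reward differs, and that alone flips the verdict.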
It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based on the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the “reward” mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the “reward” mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward-triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter.
Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire cake. At the time, that sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time.
Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.
My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality. Roughly put, this is the ability to get how things work and thus make reasonably accurate predictions. This ability is rather useful: getting how things work is a big step on the road to success.
Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality. For example, if your Call of Cthulhu character were trying to avoid being spotted by the cultists of Hastur as she spies on them, you would need to roll under your Sneak skill on percentile dice. As another example, if your D-7 battle cruiser were firing phasers and disruptors at a Kzinti strike cruiser, you would roll dice and consult various charts to see what happened. Video games also include the digital equivalent of dice. For example, if you are playing World of Warcraft, the damage done by a spell or a weapon will be random.
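That roll-under mechanic is simple to simulate; here is a minimal sketch, where the Sneak value of 45 is an invented example rather than a figure from any rulebook.

```python
import random

def sneak_check(skill, rng=random):
    """Roll-under percentile check: succeed if the d100 roll is at or under the skill."""
    return rng.randint(1, 100) <= skill

# Over many attempts, a Sneak of 45 should succeed about 45% of the time.
random.seed(0)
trials = 100_000
rate = sum(sneak_check(45) for _ in range(trials)) / trials
print(0.43 < rate < 0.47)  # True
```

The skill value translates directly into a long-run success rate, which is what makes such dice systems workable models of uncertain outcomes.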
Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.
Naturally, I do not know if the universe is random (contains elements of chance). After all, we tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance. After all, the fact that we do not know what will happen does not entail that it is a matter of chance.
People also seem to believe in chance because they think things could have been different: the die roll might have been a 1 rather than a 20 or I might have won the lottery rather than not. However, even if things could have been different it does not follow that chance is real. After all, chance is not the only thing that could make a difference. Also, there is the rather obvious question of proving that things could have been different. This would seem to be impossible: while it might be believed that conditions could be recreated perfectly, one factor can never be duplicated, namely time. Recreating an event will be a recreation. If the die comes up 20 on the first roll and 1 on the second, this does not show that it could have been a 1 the first time. All it shows is that it was 20 the first time and 1 the second.
If someone had a TARDIS and could pop back in time to witness the roll again and if the time traveler saw a different outcome this time, then this might be evidence of chance. Or evidence that the time traveler changed the event.
Even traveling to a possible or parallel world would not be of help. If the TARDIS malfunctions and pops us into a world like our own right before the parallel me rolled the die and we see it come up 1 rather than 20, this just shows that he rolled a 1. It tells us nothing about whether my roll of 20 could have been a 1.
Of course, the flip side of the coin is that I can never know that the world is non-random: aside from some sort of special knowledge about the working of the universe, a random universe and a non-random universe would seem exactly the same. Whether my die roll is random or not, all I get is the result—I do not perceive either chance or determinism. However, I go with a random universe because, to be honest, I am a gamer.
If the universe is deterministic, then I am determined to do what I do. If the universe is random, then chance is a factor. However, a purely random universe would not permit actual decision-making: it would be determined by chance. In games, there is apparently the added element of choice—I choose for my character to try to attack the dragon, and then roll dice to determine the result. As such, I also add choice to my random universe.
Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not. I go with a choice universe for the following reason: If there is no choice, then I go with choice because I have no choice. So, I am determined (or chanced) to be wrong. I could not choose otherwise. If there is choice, then I am right. So, choosing choice seems the best choice. So, I believe in a random universe with choice—mainly because of gaming. So, what about the lessons from this?
One important lesson is that decisions are made in uncertainty: because of chance, the results of any choice cannot be known with certainty. In a game, I do not know if the sword strike will finish off the dragon. In life, I do not know if the investment will pay off. In general, this uncertainty can be reduced and this shows the importance of knowing the odds and the consequences: such knowledge is critical to making good decisions in a game and in life. So, know as much as you can for a better tomorrow.
Another important lesson is that things can always go wrong. Or well. In a game, there might be a 1 in 100 chance that a character will be spotted by the cultists, overpowered and sacrificed to Hastur. But it could happen. In life, there might be a 1 in 100 chance that a doctor taking precautions will catch Ebola from a patient. But it could happen. Because of this, the possibility of failure must always be considered and it is wise to take steps to minimize the chances of failure and to also minimize the consequences.
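The arithmetic behind this lesson is worth making explicit: a risk that is unlikely on any single occasion becomes likely over repeated exposure. A quick sketch:

```python
# How a small per-occasion risk compounds over repeated exposure.

def chance_of_at_least_one(p, trials):
    """Probability that an event with per-trial probability p occurs at least once."""
    return 1 - (1 - p) ** trials

print(round(chance_of_at_least_one(0.01, 1), 3))    # 0.01
print(round(chance_of_at_least_one(0.01, 100), 3))  # 0.634
```

So a 1-in-100 event, faced a hundred times, happens at least once with better than even odds, which is why it pays to minimize both the chance and the consequences of failure.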
Keeping in mind the role of chance also helps a person be more understanding, sympathetic and forgiving. After all, if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance. It also helps in regards to praising success: knowing that chance plays a role in success is also important. For example, there is often the assumption that success is entirely deserved because it must be the result of hard work, virtue and so on. However, if success involves chance to a significant degree, then that should be taken into account when passing out praise and making decisions. Naturally, the role of chance in success and failure should be considered when planning and creating policies. Unfortunately, people often take the view that both success and failure are mainly a matter of choice—so the rich must deserve their riches and the poor must deserve their poverty. However, an understanding of chance would help our understanding of success and failure and would, hopefully, influence the decisions we make. There is an old saying “there, but for the grace of God, go I.” One could also say “there, but for the luck of the die, go I.”
As a gamer, philosopher and human being, I was morally outraged when I learned of the latest death threats against Anita Sarkeesian. Sarkeesian, who is well known as a moral critic of the misogynistic rot defiling gaming, was scheduled to speak at Utah State University. Emails were sent that threatened a mass shooting if her talk was not cancelled. For legal reasons, the University was not able to prevent people from bringing weapons to the talk, so Sarkeesian elected to cancel her talk because of concerns for the safety of the audience.
This incident is just the latest in an ongoing outpouring of threats against women involved in gaming and those who are willing to openly oppose sexism and misogyny in the gaming world (and in the real world). Sadly, this sort of behavior is not surprising and it is part of two larger problems: internet trolling and misogyny.
As a philosopher, I am in the habit of arguing for claims. However, there seems to be no need to argue that threatening women with violence, rape or death because they are opposed to misogyny in gaming and favor more inclusivity in gaming is morally wicked. It is also base cowardice in many cases: those making the threats often hide behind anonymity and spew their vile secretions from the shadows of the internet. That such people are cowards is not a shock: courage is a virtue and these are clearly people who are strangers to virtue. When they engage in such behavior on the internet, they are aptly named trolls. Gamers know the classic troll as a chaotic evil creature of great rage and little intellect, which tends to fit the internet troll reasonably well. But, the internet troll can often be a person who is not actually committed to the claims he is making. Rather, his goal is typically to goad others and get emotional responses. As such, the troll will pick his tools with a calculation to the strongest emotional impact and these tools will thus include racism, sexism and threats. There are those who go beyond mere trolling—they are the people who truly believe in the racist and sexist claims they make. They are not using misogynist and racist claims as tools—they are speaking from their rotten souls. Perhaps these creatures should be called demons rather than trolls.
While the moral right to free expression does include the saying of awful and evil things, a person should not say such things. This should not be punishable by the law (in most cases), but should be regarded as immoral actions. Matters change when threats are involved. Good sense should be used when assessing threats. After all, people Tweet and post from unthinking anger and without true intent. There are also plenty of expressions that seem to promise violence, but are also used as expressions of anger. For example, people say “I could kill you” even when they actually have no intent of doing so. However, people do make threats that have real intent behind them. While the person might not actually intend to commit the threatened act (such as murder or rape), there can be an intent to psychologically harm and harass the target and this can do real harm. When I contributed my work on fallacies to a site devoted to responding to Holocaust deniers, I received a few random threats. I was not too worried, but did have a feeling of cold anger when I read the emails. My ex-wife, who was a feminist philosopher, received the occasional threat and I was certainly worried for her. As such, I have some very limited understanding of what it would be like receiving threats and how this can impact a person’s life. Inflicting such a harm on an individual is wrong and legal sanctions should be taken in such cases. There is a right to express ideas, but not a right to threaten, abuse and harass. Especially in a cowardly manner from the shadows.
As might be suspected, I am in support of increasing the involvement of women in gaming and I favor removing sexism from games. My main reason for supporting more involvement of women in gaming is the same reason I would encourage anyone to game: I think it is fun and I want to share my beloved hobby with people. There is also the moral motivation: such exclusion is morally repugnant and unjustified. If there are any good arguments against women being more involved in playing and creating games, I would certainly be interested in seeing them. But, I am quite sure there are none—if there were, people would be presenting those rather than screeching hateful threats from their shadowed caves.
As far as removing sexism from video games, the argument for that is easy and obvious. Sexism is morally wrong and games that include it would thus be morally wrong. Considering the matter as a gamer and an author of tabletop RPG adventures, I would contend that the removal of sexist elements would improve games and certainly not diminish their quality. True, doing so might rob the sexists and misogynists of whatever enjoyment they get from such things, but this is not a loss that is even worthy of consideration. In this regard, it is analogous to removing racist elements from games—the racist has no moral grounds to complain that he has been wronged by the denial of his opportunity to enjoy his racism.
I do, of course, want to distinguish between sexual elements and sexism. A game can have sexual elements without being sexist—although there can be a fine line between the two. I am also quite aware that games set in sexist times might require sexist elements when recreating those times. So, for example, a WWII game that has just male generals need not be sexist (although it would be reflecting the sexism of the time). Also, games can legitimately feature sexist non-player characters, just as they can legitimately include racist characters and other sorts of evil traits. After all, villains need to be, well, villains.
Having written before on the ethics of asteroid mining, I thought I would return to this subject and address an additional moral concern, namely the potential dangers of asteroid (and comet) mining. My concern here is not with the dangers to the miners (though that is obviously a matter of concern) but with dangers to the rest of us.
While the mining of asteroids and comets is currently the stuff of science fiction, such mining is certainly possible and might even prove to be economically viable. One factor worth considering is the high cost of getting material into space from earth. Given this cost, constructing things in space using material mined in space might be cost effective. As such, we might someday see satellites built right in space from material harvested from asteroids. It is also worth considering that the cost of mining materials in space and shipping them to earth might also be low enough that space mining for this purpose would be viable. If the material is expensive to mine or has limited availability on earth, then space mining could thus be viable or even necessary.
If material mined in space is to be used on earth, the obvious problem is how to get the material to the surface safely and as cheaply as possible. One approach is to move an asteroid close to the earth to facilitate mining and transportation—it might be more efficient to move the asteroid rather than send mining vessels back and forth. One obvious moral concern about moving an asteroid close to earth is that something could go wrong and the asteroid could strike the earth, perhaps in a populated area. Another obvious concern is that the asteroid could be intentionally used as a weapon—perhaps by a state or by non-state actors (such as terrorists). An asteroid could do considerable damage and would provide a “clean kill”, that is, it could do a lot of damage without radioactive fallout or chemical or biological residue. An asteroid might even “accidentally on purpose” be dropped on a target, thus allowing the attacker to claim that it was an accident (something harder to do when using actual weapons).
Given the dangers posed by moving asteroids into earth orbit, this is clearly something that would need to be carefully regulated. Of course, given humanity’s track record, accidents and intentional misuse are guaranteed.
Another matter of concern is the transport of material from space to earth. The obvious approach is to ship material to the surface using some sort of vehicle, perhaps constructed in orbit from materials mined in space. Such a vehicle could be relatively simple—after all, it would not need a crew and would just have to ensure that the cargo landed in roughly the right area. Another approach would be to just drop material from orbit—perhaps by surrounding valuable materials with materials intended to ablate during the landing and with a parachute system for some basic braking.
The obvious concern is the danger posed by such transport methods. While such vehicles or rock-drops would not do the sort of damage that an asteroid would, if one crashed hard into a densely populated area (intentionally or accidentally) it could do considerable damage. While such crashes will almost certainly occur, there does seem to be a clear moral obligation to try to minimize the chances of such crashes. The obvious problem is that such safety matters would tend to increase cost and decrease convenience. For example, having the landing zones in unpopulated areas would reduce the risk of a crash into an urban area, but would involve the need to transport the materials from these areas to places where they can be processed (unless the processing plants are built in the zone). As another example, payload sizes might be limited to reduce the damage done by crashes. As a final example, the vessels or drop-rocks might be required to have safety systems, such as backup parachutes. Given that people will cut costs and corners and suffer lapses of attention, accidents are probably inevitable. But they should be made less likely by developing rational regulations. Also of concern is the fact that the vessels and drop-rocks could be used as weapons (as a rule, any technology that can be used to kill people will be used to kill people). As such, there will need to be safeguards against this. It would, for example, be rather bad if terrorists were able to get control of the drop system and start dropping vessels or drop-rocks onto a city.
Despite the risks, if there is profit to be made in mining space, it will almost certainly be done. Given that the resources on earth are clearly limited, access to the bounty of the solar system could be good for (almost) everyone. It could also be another step for humanity away from earth and towards the stars.
Newcomb’s Paradox was created by William Newcomb of the University of California’s Lawrence Livermore Laboratory. The dread philosopher Robert Nozick published a paper on it in 1969 and it was popularized in Martin Gardner’s 1972 Scientific American column.
In this essay I will present the game that creates the paradox and then discuss a specific aspect of Nozick’s version, namely his stipulation regarding the effect of how the player of the game actually decides.
The paradox involves a game controlled by the Predictor, a being that is supposed to be masterful at predictions. Like many entities with but one ominous name, the Predictor’s predictive capabilities vary with each telling of the tale. The specific range is from having an exceptional chance of success to being infallible. The basis of the Predictor’s power also varies. In the science-fiction variants, it can be a psychic, a super alien, or a brain scanning machine. In the fantasy versions, the Predictor is a supernatural entity, such as a deity. In Nozick’s telling of the tale, the predictions are “almost certainly” correct and he stipulates that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
Once the player confronts the Predictor, the game is played as follows. The Predictor points to two boxes. Box A is clear and contains $1,000. Box B is opaque. The player has two options: just take box B or take both boxes. The Predictor then explains to the player the rules of its game: the Predictor has already predicted what the player will do. If the Predictor has predicted that the player will take just B, B will contain $1,000,000. Of course, this should probably be adjusted for inflation from the original paper. If the Predictor has predicted that the player will take both boxes, box B will be empty, so the player only gets $1,000. In Nozick’s version, if the player chooses randomly, then box B will be empty. The Predictor does not inform the player of its prediction, but box B is either empty or stuffed with cash before the player actually picks. The game begins and ends when the player makes her choice.
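The rules fix four possible payoffs, and the pull of each of the stock answers can be seen in a simple expected-value sketch. The accuracy figures here are illustrative assumptions, not values from Nozick's paper:

```python
# Payoffs fixed by the rules above: what the player wins, given the
# Predictor's prediction and the player's actual choice.
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,  # B was filled, player takes just B
    ("one-box", "two-box"): 1_001_000,  # B was filled, player takes both
    ("two-box", "one-box"): 0,          # B was left empty, player takes just B
    ("two-box", "two-box"): 1_000,      # B was left empty, player takes both
}

def expected_payoff(choice, accuracy):
    """Expected winnings if the Predictor foresees the choice with the given accuracy."""
    other = "two-box" if choice == "one-box" else "one-box"
    return accuracy * PAYOFF[(choice, choice)] + (1 - accuracy) * PAYOFF[(other, choice)]

# The better the Predictor, the more one-boxing dominates the calculation.
for accuracy in (0.5, 0.9, 0.99):
    print(accuracy, expected_payoff("one-box", accuracy), expected_payoff("two-box", accuracy))
```

At an accuracy of 0.5, two-boxing comes out slightly ahead; as the accuracy rises toward 1, one-boxing dominates the calculation, even though the boxes were already set before the choice was made.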
This paradox is regarded as a paradox because the two stock solutions are in conflict. The first stock solution is that the best choice is to take both boxes. If the Predictor has predicted the player will take both boxes, the player gets $1,000. If the Predictor has predicted (wrongly) that the player will take B, she gets $1,001,000. If the player takes just B, then she risks getting $0 (assuming the Predictor predicted wrong).
The second stock solution is that the best choice is to take B. Given the assumption that the Predictor is either infallible or almost certainly right, then if the player decides to take both boxes, she will get $1,000. If the player elects to take just B, then she will get $1,000,000. Since $1,000,000 is more than $1,000, the rational choice is to take B. Now that the paradox has been presented, I can turn to Nozick’s condition that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.
This stipulation provides some insight into how the Predictor’s prediction ability is supposed to work. This is important because the workings of the Predictor’s ability to predict are, as I argued in my previous essay, rather significant in sorting out how one should decide.
The stipulation mainly serves to indicate how the Predictor’s ability does not work. First, it would seem to indicate that the Predictor does not rely on time travel—that is, it does not go forward in time to observe the decision and then travel back to place (or not place) the money in the box. After all, the prediction in this case would be explained in terms of what the player decided to do. This still leaves it open for the Predictor to visit (or observe) a possible future (or, more accurately, a possible world that is running ahead of the actual world in its time) since the possible future does not reveal what the player actually decides, just what she decides in that possible future. Second, this would seem to indicate that the Predictor is not able to “see” the actual future (perhaps by being able to perceive all of time “at once” rather than linearly as humans do). After all, in this case it would be predicting based on what the player actually decided. Third, this would also rule out any form of backwards causation in which the actual choice was the cause of the prediction. While there are, perhaps, other specific possibilities that are also eliminated, the gist is that the Predictor has to, by Nozick’s stipulation, be limited to information available at the time of the prediction and not information from the future. There are a multitude of possibilities here.
One possibility is that the Predictor is telepathic and can predict based on what it reads regarding the player’s intentions at the time of the prediction. In this case, the best approach would be for the player to think that she will take just box B and then, after the prediction is made, take both. Or, alternatively, she could use some sort of drug or technology to “trick” the Predictor. The success of this strategy would depend on how well the player can fool the Predictor. If the Predictor cannot be fooled, or is unlikely to be fooled, then the smart strategy would be to intend to take box B and then just take box B. After all, if the Predictor cannot be fooled, then box B will be empty if the player intends to take both.
Another possibility is that the Predictor is a researcher—it gathers as much information as it can about the player and makes a shrewd guess based on that information (which might include what the player has written about the paradox). Since Nozick stipulates that the Predictor is “almost certainly” right, the Predictor would need to be an amazing researcher. In this case, the player’s only way to mislead the Predictor is to determine its research methods and try to “game” it so the Predictor will predict that she will just take B, then actually decide to take both. But, once again, the Predictor is stipulated to be “almost certainly” right—so it would seem that the player should just take B. If B is empty, then the Predictor got it wrong, which would “almost certainly” not happen. Of course, it could be contended that since the player does not know how the Predictor will predict based on its research (the player might not know what she will do), then the player should take both. This, of course, assumes that the Predictor has a reasonable chance of being wrong—contrary to the stipulation.
A third possibility is that the Predictor predicts in virtue of its understanding of what it takes to be a deterministic system. Alternatively, the system might be a random one, but one governed by probabilities. In either case, the Predictor uses the data available to it at the time and then “does the math” to predict what the player will decide.
If the world really is deterministic, then the Predictor could be wrong if it is determined to make an error in its “math.” So, the player would need to predict how likely this is and then act accordingly. But, of course, the player will simply act as she is determined to act. If the world is probabilistic, then the player would need to estimate the probability that the Predictor will get it right. But, it is stipulated that the Predictor is “almost certainly” right so any strategy used by the player to get one over on the Predictor will “almost certainly” fail, so the player should take box B. Of course, the player will do what “the dice say” and the choice is not a “true” choice.
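On the expected-value reading, “almost certainly” is in fact far stronger than the player needs to worry about: the Predictor only has to beat a modest break-even accuracy for one-boxing to have the higher expectation. The calculation below is my own illustration; an explicit accuracy parameter is not part of Nozick’s formulation:

```python
# Break-even accuracy p* at which one-boxing and two-boxing have
# equal expected payoffs (an illustration, not part of the paradox).
#
#   one_box(p) = p * 1_000_000
#   two_box(p) = p * 1_000 + (1 - p) * 1_001_000
#
# Setting them equal gives 2_000_000 * p = 1_001_000.

p_star = 1_001_000 / 2_000_000
print(p_star)  # 0.5005
```

So any Predictor that is right even slightly more often than a coin flip already makes taking just B the better expected choice, which is why a player’s attempts to “get one over” on an almost-certainly-right Predictor are so hopeless.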
If the world is one with some sort of metaphysical free will that is in principle unpredictable, then the player’s actual choice would, in principle, be unpredictable. But, of course, this directly violates the stipulation that the Predictor is “almost certainly” right. If the player’s choice is truly unpredictable, then the Predictor might make a shrewd/educated guess, but it would not be “almost certainly” right. In that case, the player could make a rational case for taking both—based on the estimate of how likely it is that the Predictor got it wrong. But this would be a different game, one in which the Predictor is not “almost certainly” right.
This discussion seems to nicely show that the stipulation that “what you actually decide to do is not part of the explanation of why he made the prediction he made” is a red herring. Given the stipulation that the Predictor is “almost certainly” right, it does not really matter how its predictions are explained. The stipulation that what the player actually decides is not part of the explanation simply serves to mislead by creating the false impression that there is a way to “beat” the Predictor by actually deciding to take both boxes and gambling that it has predicted the player will just take B. As such, the paradox seems to be dissolved—it is the result of some people being misled by one stipulation and not realizing that the stipulation that the Predictor is “almost certainly” right makes the other irrelevant.