4. A paradox?
This is a really good problem in my opinion and I certainly don’t mind calling it a paradox.
There’s an utterly convincing argument for taking just box B, as well as an utterly convincing argument for taking both boxes. Something must give, but it’s not clear what.
The paradox may also be captured by considering the following statement:
You’ll get more money if you take just box B than if you take both boxes.
Is this true or false?
One-boxers will say true
because, given the predictor’s prowess, you should expect to get a million dollars if you take just box B but only a thousand if you take both boxes.
But two-boxers will say false
because, given that the contents of box B are fixed, you’ll obviously get a thousand dollars more if you take both boxes than if you take just box B.
As before, both sides seem to be right.
Another way of raising the paradox is this. Everyone should agree that the following statement is true:
There’s more money in both boxes than in box B alone.
From this, it seems to follow that, if your sole aim is money, you should take both boxes.
But suppose now that you get to “play the game” many times into the long run. Should you two-box each time? Well, the statement above will be true each
time you play the game, so, yes, you should two-box each time.
But if the predictor’s prowess persists into the long run, as we may assume, a consistent one-boxer will clearly end up richer than a consistent two-boxer. (The previous expected-utility calculations demonstrate this.)
And so it seems that you should one-box each time.
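The long-run claim can be checked directly. Here is a minimal simulation sketch, using the dollar amounts from the problem ($1,000 in box A, $1,000,000 in box B when one-boxing is predicted) and assuming, as in the text, a predictor who guesses correctly 80% of the time, independently on each round:

```python
import random

random.seed(0)

def play(strategy, accuracy=0.8):
    """One round of Newcomb's game.

    strategy: 'one' (take just box B) or 'two' (take both boxes).
    The predictor guesses the player's choice with the given accuracy,
    and fills box B with $1,000,000 only if it predicts one-boxing.
    """
    correct = random.random() < accuracy
    predicts_one = (strategy == 'one') == correct
    box_b = 1_000_000 if predicts_one else 0
    box_a = 1_000
    return box_b if strategy == 'one' else box_a + box_b

n = 100_000
one_boxer = sum(play('one') for _ in range(n))
two_boxer = sum(play('two') for _ in range(n))
print(one_boxer / n)  # roughly 800,000 per round on average
print(two_boxer / n)  # roughly 201,000 per round on average
```

The consistent one-boxer ends up far richer, even though on every single round the two-boxer walks away with $1,000 more than they would have by one-boxing on that same round; that is exactly the tension the long-run paradox trades on.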
This is the “long-run paradox”, so to speak. Both sides claim to be doing the thing that gets them more money and both sides seem to have a decisive point in their favour.
Faced with this Gordian knot, some people are driven to conclude that, actually, both sides are right and that what the paradox really shows is that no
such predictor as described could possibly exist! (Or, more carefully, you could have no good reason to believe that any such predictor exists.)
On this view, Newcomb’s predictor is like Bertrand Russell’s barber, who supposedly shaved all and only the inhabitants of his village who did not shave themselves. But there can be no such barber, because if there were, he would be compelled both to shave and not to shave himself, which would be a contradiction.
Likewise, it is claimed, Newcomb’s predictor cannot exist, because if he did, it would be correct to take both boxes, as well as to take just box B, which would be a contradiction.
Unfortunately, this position is not very popular because it seems a little desperate. We know that human beings are often predictable and it is no stretch of the imagination to suppose that someone could know you well enough to predict your choice in some situation with a fair degree of accuracy.
Indeed, the expected-utility calculations show that Newcomb’s paradox will arise if the predictor is just 51% accurate! So our 80% was actually very generous.
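The arithmetic behind that 51% figure can be sketched as follows, using the dollar amounts from the problem and assuming the predictor is equally accurate whichever choice you make:

```python
def expected_utility(p):
    """Expected payoff of each strategy, given predictor accuracy p.

    One-boxing: box B holds $1,000,000 iff the prediction was correct.
    Two-boxing: box B holds $1,000,000 iff the prediction was wrong,
    and you collect box A's $1,000 either way.
    """
    eu_one = p * 1_000_000
    eu_two = (1 - p) * 1_001_000 + p * 1_000
    return eu_one, eu_two

# Break-even: p * 1,000,000 = (1 - p) * 1,001,000 + p * 1,000
# => 2,000,000 p = 1,001,000  =>  p = 0.5005
for p in (0.5005, 0.51, 0.8):
    print(p, expected_utility(p))
```

On this reckoning the break-even accuracy is just 50.05%, so at 51% one-boxing already has the higher expected utility, and at 80% the gap is enormous.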
As such, most people admit the possibility of Newcomb’s predictor and maintain that the case for either one-boxing or two-boxing must contain a flaw; the only question is which.
When this problem was first discussed in the 1970s, the dust was very much in the air, and one-boxers, two-boxers and predictor-rejectors could all be found in the literature. By the 1980s, the predictor-rejectors had mostly disappeared and many two-boxers were starting to speak up, although some one-boxing voices could still be heard.
By the 1990s, however, the dust seemed to have settled, and the orthodox opinion was that two-boxing was probably correct and that the case for one-boxing must therefore somehow be flawed. In particular, the principle of maximizing expected utility was being seriously called into question, at least in its traditional form.
It is worth considering this briefly before proceeding.