Newcomb’s problem

1. The problem
2. Why you should take just box B
3. Why you should take both boxes
4. A paradox?
5. An old principle
6. A simpler version
7. The birthday gifts
8. Compatibilism
9. The modified predictor
10. References

5. An old principle

The principle of maximizing expected utility has a pretty distinguished history where the art of making a decision is concerned, though not an entirely untroubled one.

It dates to the 17th century, foreshadowed by such advice as this from the Port Royal Logic:
... to judge what one ought to do to obtain a good or avoid an evil, one must not only consider the good and evil in itself, but also the probability that it will or will not happen and view geometrically the proportion that all these things have together.
Such advice is little more than common sense, but the distinction of the principle is to enshrine the idea mathematically, as illustrated by our previous calculations for taking just box B.
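
To make the calculation concrete, here is a minimal sketch in Python. The 0.8 and 0.2 figures match the conditional probabilities quoted later in this section; the $1,000 in box A is the figure from the standard statement of the problem and is assumed here for illustration.

MILLION = 1_000_000    # box B's contents if the predictor foresaw one-boxing
THOUSAND = 1_000       # box A's contents, present no matter what you do

def expected_utility(outcomes):
    # The "weighted balance" of an option: each payoff times its
    # probability, summed ("viewed geometrically", as the Logic puts it).
    return sum(payoff * prob for payoff, prob in outcomes)

one_box = expected_utility([
    (MILLION, 0.8),             # Prob(B holds a million | take just B)
    (0, 0.2),                   # Prob(B holds nothing | take just B)
])
both_boxes = expected_utility([
    (MILLION + THOUSAND, 0.2),  # Prob(B holds a million | take both)
    (THOUSAND, 0.8),            # Prob(B holds nothing | take both)
])

print(one_box, both_boxes)      # 800000.0 vs 201000.0

On these figures the principle favours taking just box B by a wide margin, which is exactly the recommendation that will be disputed below.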

As revered as the principle is, however, there have always been a few troublesome “decision problems” which have mocked the principle or otherwise called it into question. The most famous of these is perhaps the St. Petersburg paradox, which is as old as the principle itself, while a more recent example is the two-envelope paradox, which has a notorious tendency to stump those used to wielding the principle.

In each of these cases, a situation is described where a decision must be made but the principle appears to recommend a decision that seems to be obviously wrong or even silly. The challenge is to explain why the principle is not thereby refuted.
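
To see the flavour of these challenges, take the St. Petersburg game in one standard version: a fair coin is tossed until it lands heads, and the payout is 2^k dollars if the first heads arrives on toss k. Every term of the expected-utility sum contributes a dollar, so the sum grows without bound, and the principle seems to say that no finite entry fee is too high to pay. A quick sketch:

def st_petersburg_partial_eu(n_terms):
    # Prob(first heads on toss k) = 2**-k, payout = 2**k dollars,
    # so every term of the weighted balance contributes exactly 1.
    return sum((2 ** -k) * (2 ** k) for k in range(1, n_terms + 1))

print(st_petersburg_partial_eu(50))   # 50.0, and still climbing

Almost no one would stake their savings on a single play of this game, which is the sense in which the principle's recommendation seems silly.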

We can’t pause to examine these paradoxes here (though see the references), but notice that Newcomb’s problem raises a similar challenge. Indeed, some regard it as the most serious challenge to the principle since its conception. Thus, while the principle is widely felt to survive the two challenges mentioned above, not to mention lesser-known ones, Newcomb’s problem has persuaded many that the principle is really mistaken, since it recommends the “wrong” decision to take just box B.

Of course, only an uncompromising two-boxer could say this, but there are many such people, and their common refrain is that a decision (to act one way rather than another) should always be based on what these acts would cause to happen, a consideration that the 17th-century principle appears to overlook entirely.

In particular, the following sorts of conditional probabilities are held to blame:

Prob (Box B contains a million | You take just box B) = 0.8
Prob (Box B contains nothing | You take just box B) = 0.2
Prob (Box B contains nothing | You take both boxes) = 0.8
Prob (Box B contains a million | You take both boxes) = 0.2

since the principle accords them such a central role in the decision process despite their being blind to the fact that taking box B, for example, does not cause (or even tend to cause) the million dollars to appear in it. (Correlation is not causation.)

All the same, the principle is not abandoned completely, since two-boxers agree that the notion of a “weighted balance” of our options should be preserved. The point is only that the weights should not be the causally-blind conditional probabilities of the Port Royal Logic.

Proponents of such a causal decision theory, as it has come to be known, are still working out the best way to salvage the traditional principle. In a nutshell, they want a revised decision principle which preserves the acknowledged virtues of the traditional one but which proves also to recommend taking both boxes in Newcomb’s problem.
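
As a rough illustration (a sketch of the general idea, not any particular theorist’s official proposal), suppose the weight is simply your credence p that the million is already sitting in box B. Crucially, p is the same whichever act you choose, since choosing cannot cause the contents to change:

MILLION, THOUSAND = 1_000_000, 1_000

def causal_eu_one_box(p):
    # p = credence that box B already holds the million;
    # it does not shift with your choice.
    return p * MILLION

def causal_eu_both_boxes(p):
    return p * (MILLION + THOUSAND) + (1 - p) * THOUSAND

for p in (0.0, 0.2, 0.5, 0.8, 1.0):
    advantage = causal_eu_both_boxes(p) - causal_eu_one_box(p)
    assert abs(advantage - THOUSAND) < 1e-6   # always ~$1,000 ahead

Whatever p may be, taking both boxes comes out $1,000 ahead, so a principle weighted this way recommends two-boxing, just as its proponents intend.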

This sketch is rough but worth mentioning because such thinking has dominated the past twenty years of discussion of Newcomb’s problem. Indeed, it underlies the orthodox textbook opinion that taking both boxes is the correct decision.

There is an obvious danger in thinking this way, of course, since it is tantamount to making one’s theory go where one wants it to go. Is it really that “obvious” that one should take both boxes in Newcomb’s problem?

My opinion is that taking both boxes is a mistake, and I wish now to explain (what I think is) the precise error in the “convincing” case for taking both boxes presented earlier.