This was written in 2009. It explains the notion of a logical paradox in a way that comes very naturally to me. The conclusion drawn at the end is probably the most important thing for me (“We are blinder than we can imagine in matters of straightforward logical reasoning”) but I find that it is rarely emphasized by others.
9. Newcomb’s paradox
Newcomb’s paradox may take various forms, but we only need a simple version here.
Imagine that a sealed box is placed before you, containing either a thousand dollars or a million, but you don’t know which. The money in the box is yours to take (free money, no catch), but you are required to decide whether you’d like to say Abracadabra before opening the box and taking the money.
Needless to say, saying Abracadabra would normally just be a childish thing to do, but the trouble is that the man responsible for the contents of the box has made a prediction as to whether you would nevertheless say it. If he predicted that you would, then he has left the million dollars in the box, whereas if he predicted that you wouldn’t, then he has left only the thousand dollars in it.
His prediction was made yesterday and the contents of the box have already been settled, and cannot now change. You are not told what his prediction was, and so you don’t know if the box contains the thousand or the million. However, you have good reason to believe that he has correctly predicted whether you would say Abracadabra. (For example, he is an astute psychologist who is very good at predicting what people will do, with an impressive record of correct predictions to date.)
If your sole concern is to get as much money as you can, should you say Abracadabra before opening the box?
Some people think that you should. For if you do, the man is likely to have predicted this and have left the million dollars in the box, whereas if you don’t, he is likely to have predicted that and have left only the thousand dollars in it.
After all, he is an astute psychologist who is very good at predicting what people will do, with an impressive record of correct predictions to date. So saying Abracadabra before opening the box is likely to make you a millionaire!
In contrast, other people consider this to be absurd.
After all, whatever the track record of this predictor, the contents of the box were settled yesterday and cannot now change. Saying Abracadabra, in particular, will certainly not cause them to change! Clearly, the contents of the box will remain what they are, whether you say Abracadabra, beat your hairy chest, or do anything else of this sort.
So you cannot possibly gain anything by saying Abracadabra. You should just open the box and take whatever it contains: a thousand dollars or a million, as the case may be. Saying Abracadabra in the hope of finding more money in the box is just plain silly!
If you think about this for a while, then, as tempting as the first argument may be, the second argument will soon come to seem irrefutable. It’s not that the first argument has no force. Rather, its force seems to be completely overshadowed by the force of the second argument. For this reason, many decline to call this a paradox, and politely refer to it as “Newcomb’s problem” instead.
Even so, there is no consensus as to which argument is really the correct one. As hinted, the majority of people who have considered it seem eventually to endorse the second argument, and conclude that there is no point in saying Abracadabra, but a significant number of people continue to be disturbed by the apparent force of the first argument.
From our point of view, the second argument is indeed the interesting one, since, while it seems to be “completely flawless,” we cannot help but ask if this is just another case of a subtle error of logic going undetected. It’s a real test this time, because there is no “crazy conclusion” to help us out and we’re pretty much on our own.
Unfortunately, we cannot try to resolve this conundrum here, but notice that, while our example concerned a far-fetched offer of free money, the problem can easily take a more sobering form.
Suppose that your doctor recommends a certain DNA test to see if you have a certain defective gene.
The gene is responsible for a dreaded mental illness later in life, but if the symptoms haven't surfaced yet, the condition has a chance of being treated. Indeed, many who take the test prove to have the gene, so the gene seems to be rather prevalent, and the test is always recommended.
But suppose that, unknown to medical science, there is a most peculiar correlation between deciding to take the test and proving to have the gene. (An unsuspected law of nature.) Thus, something in the order of things makes it likely that, if you decide to take the test, then, amazingly, you will prove to have the gene; and vice versa, should you decline to take the test. (This is why many who take the test prove to have the gene.)
This may seem fanciful, since whether you have the gene was determined at birth (or at the moment of conception), so how can deciding to take the test possibly influence whether you have the gene?
But the situation is not essentially different from the case above, where if you decide to say Abracadabra, then, in all likelihood, a million dollars will prove to have been placed in the box yesterday; and vice versa, should you decline to say it. The only difference is that we have replaced the human predictor, responsible for placing money in the box, with an impersonal law of nature, responsible for laying down the gene.
You have no idea whether this law of nature really exists, but no one can rule it out either. Should you take the test?
Your doctor, who has never heard of Newcomb’s paradox, reasons calmly that, since your genes were determined at birth, taking the test cannot possibly cause you to have the gene if you do not already have it. So there can be no reason to fear the test! (Same as arguing that there is no point in saying Abracadabra.)
As before, this argument looks “completely convincing,” but the question is whether it’s really correct.
By now, we should have seen enough to know that, when it comes to a logical paradox, the simplest question may be the hardest one.