Mark Kleiman has brought to our attention a discussion in Brad DeLong's blog of the well-known Newcomb's problem. The problem is described as follows: a hyperintelligent extraterrestrial alien who can model and predict individual human behavior with perfect (or near-perfect) accuracy offers you a choice between A) the contents of a locked, opaque box, and B) the same contents plus $10. The catch is that the alien has already predicted which you will choose, and has placed $1,000,000 in the box if he/she/it predicted you would choose (A), and left it empty otherwise. Which do you choose?
The problem is actually a little misleading; it's cast as a decision-theoretic puzzle, but it's really just an illustration of the incompatibility of the idea of free will with the idea of a deterministic universe. Viewed as a decision problem, it produces an odd result: even though choice (B) is in every case strictly better than choice (A) (by exactly $10, in fact), it's not obvious that it's preferable. The real issue becomes clearer, though, if the problem is changed slightly: let the box be transparent. Now you can see exactly what the alien predicted, and you have no incentive to pass up the extra $10. Or do you?
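The dominance claim above can be made concrete with a small payoff table. This is just an illustrative sketch (the function and constant names are our own, not from the original puzzle): the alien's prediction fixes the box's contents before you choose, and then your choice either adds the $10 or doesn't.

```python
# Payoff table for this version of Newcomb's problem.
# The alien's prediction determines the box's contents in advance.
BOX_IF_PREDICTED_A = 1_000_000  # alien predicted you'd take the box alone
BOX_IF_PREDICTED_B = 0          # alien predicted you'd take the box plus $10
BONUS = 10

def payoff(prediction: str, choice: str) -> int:
    """Dollars received, given the alien's prediction and your actual choice."""
    box = BOX_IF_PREDICTED_A if prediction == "A" else BOX_IF_PREDICTED_B
    return box + (BONUS if choice == "B" else 0)

# Dominance: whatever the alien predicted, choosing (B) beats (A) by exactly $10.
for prediction in ("A", "B"):
    assert payoff(prediction, "B") - payoff(prediction, "A") == BONUS
```

The table makes the tension visible: row by row, (B) strictly dominates (A), yet if the prediction is accurate, the only outcomes actually realized are $1,000,000 for choosing (A) and $10 for choosing (B).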
Well, it depends on what you are, really. If you're possessed of free will, and can truly decide on the spur of the moment to take the $10, then you obviously would, if you see the box full of cash--precisely because the alien wouldn't be able to predict such an action in advance. (After all, if you see the box is empty, you can always decide to decline the $10, just to prove the alien fallible. That's what free will means, right?)
If, on the other hand, you're a deterministic algorithm forced by the laws of nature to make a predetermined choice, then that choice could as easily be to forgo the $10 as to grab it--but in that case, there's no point asking "what would you do?", except as a purely empirical question (i.e., "what would you have no choice but to do?"). You can believe yourself to be one or the other--but Newcomb's problem asks you to believe both at the same time, and is therefore fundamentally self-contradictory.