Oxblog's Josh Chafetz dismisses the significance of the latest match between chess champion Garry Kasparov and a computer. "Chess is a fully self-contained world, with a fairly simple set of rules," he writes. "The day is still, I think, a long way off that computers will be able competently to navigate the real world, because the real world does not have a set of easily understandable rules."
Actually, computers are perfectly capable of navigating the real world competently. I've never seen one trip and fall, for example, or injure itself accidentally. Computers are rather passive, sedentary creatures, of course--they don't tend to move, being content to let others place them where they please. But does that make them incompetent?
The answer, of course, is that it depends on the rules of the game. There are currently many well-defined "games", such as go, face recognition, or basketball, at which humans are still much better than machines. But we can easily conceive of the day, however far off, when we might lose to the world's best robot/computer at just about any such well-defined game, including these three. Our natural reaction is therefore to come up with games (such as "navigating the real world", "appreciating a rose or a symphony", or "pondering the meaning of life") for which the rules are sufficiently ill-defined that we can smugly declare ourselves the winners. The point of these nebulous "games", in truth, is that to "win" at them is simply to be human, and since a computer will never be a human, we win by default.
The fallacy here is not underestimation of the potential of technology, but rather overestimation of the indefinability of human nature. We assume that because we have virtually no understanding of the "rules" by which we play our various human games, the rules are therefore undefinable. In fact, we may be far better defined than we realize, behaving according to a set of rules that are encoded in our genes and expressed in the structure of our nervous systems--rules that could, in principle, be programmed just as easily into an appropriately designed computer.
Of course, the games we play are not necessarily the ones we would want a computer to play. Do we really want to build computers that--following the rules of our own "game"--overeat, stupefy themselves with mood-altering substances, slack off at their jobs, get into petty fights with each other, and routinely make careless errors? Obviously not; why on earth would we build a machine to do what we do so naturally, all by ourselves?
No, we would inevitably choose to build our computers to do what we can't do, such as work tirelessly and without error at some immensely taxing yet deadly dull cognitive task--like, say, generating and evaluating millions of chess moves. And that is, in fact, precisely what the designers of Garry Kasparov's computer opponent have done. They have not, it is true, created a human. But if that had been their goal, then why would they have bothered to tinker with computers at all, when the old-fashioned way is surely faster, cheaper and more fun?