... in a Technology Review piece
on chess & AI, performing a reductio ad absurdum
on his own argument, which betrays cluelessness at several levels, not least in provocateurship. As with much polemic, it is seeded with some reasonable observations but is distorted by tendentiousness and weakness of argument buttressed by spleenful rhetoric. It would deserve a thorough fisking, were it to rise to that level; but the interesting questions that could be raised and investigated (What is cognition? What is thought? What is intelligence? What is consciousness? And how do they depend upon one another?) are not broached by Dennett here, much less explored. So I'll fisk only in part. I don't have any brief against Dennett or his views generally; I just think that here he gives more comfort to the other side of the question. While he somewhat justifiably claims that critics have moved the goalposts, he then proceeds to do so himself, more radically, complaining all the while, and in the process scoring an own goal.

"Chess requires brilliant thinking, supposedly the one feat that would be--forever--beyond the reach of any computer. But for a decade, human beings have had to live with the fact that one of our species' most celebrated intellectual summits--the title of world chess champion--has to be shared with a machine, Deep Blue, which beat Garry Kasparov in a highly publicized match in 1997."
Error of fact: The title of world chess champion was never at stake in an exhibition match. And the premise that chess requires brilliant thinking rather than massive computation is precisely what Deep Blue's accomplishment refutes. (Oh, but we're in the popular imagination. So massive computation is brilliant thinking.)

"The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don't know when to accept a draw."
Um, no. The best computer chess makes moves inexplicable to a human, moves which may well be objectively best but which come from no formulation of a strategic plan. The best human chess paradoxically often arises from recovery from earlier less-than-best moves (e.g., deceiving the opponent along a false trail). The bit about not knowing when to accept a draw is specious, since it is a matter of convenience in play versus humans.

"When Deep Blue beat Kasparov in 1997, many commentators were tempted to insist that its brute-force search methods were
entirely unlike the exploratory processes that Kasparov used when he conjured up his chess moves. But that is simply not so. Kasparov's brain is made of organic materials and has an architecture notably unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine that has an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches."
Seizing upon the "entirely unlike," this commentator is tempted to insist that the process is exactly the same, except for the differences? This is pruning an argument rather close to topiary. What matters is not whether the heuristic is explicit or implicit; what matters is whether the heuristic is imposed or self-generated.

"The fact is that the search space for chess is too big for even Deep Blue to explore exhaustively in real time, so like Kasparov, it prunes its search trees by taking calculated risks, and like Kasparov, it often gets these risks precalculated. Both the man and the computer presumably do massive amounts of "brute force" computation on their very different architectures. After all, what do neurons know about chess? Any work they do must use brute force of one sort or another."
The fact is that computers exhaust a large fraction of the search space locally, while the human chessplayer's subconscious calculation is highly inexact (though refined by pattern recognition honed by experience) and of course the conscious calculation only traverses a sliver of the search space, and even that not fully. There is a difference (and interplay) between filtration and selection. But define "brute force" broadly enough, and everything is merely a matter of computation -- welcome back to the clockwork universe.

"It may seem that I am begging the question by describing the work done by Kasparov's brain in this way, but the work has to be done somehow, and no way of getting it done other than this computational approach has ever been articulated. It won't do to say that Kasparov uses "insight" or "intuition," since that just means that Kasparov himself has no understanding of how the good results come to him. So since nobody knows how Kasparov's brain does it--least of all Kasparov himself--there is not yet any evidence at all that Kasparov's means are so very unlike the means exploited by Deep Blue."
Yes, begging the question. No, there is abundant evidence that Kasparov's and Deep Blue's means differ substantively, even though we can specify little of the former. Drawing analogies between mental and computational processes can help elucidate the former, but the ability to draw analogies is one distinguishing characteristic of intelligence, however defined. Add a meaningless pawn to a chess position, and the human recognizes the similarity while the computer relaunches a full analysis. (Add a meaningful pawn and the human may well be misled.)
But Dennett proves incapable of analogizing, relying instead on the brute force of rhetoric. The whole silicon/protein dichotomy is a red herring: The difference between emergent behaviors of designed algorithms versus adaptive systems is what's in play. (Should some cellular automata be considered more intelligent because they give rise to more complex behavior? Or simpler?) The latter may or may not reduce to some algorithmic architecture, and might usefully be modelled by such. But there is no evidence that they must be algorithmic, much less designed.
(Older speculations around this topic, in fact.)

On second thought, I'll save you a click on the first link:
Early attempts to replicate human modes of analysis were (and still are) an abject failure. The way forward was to make the best use of what computers do best, iteration and computation. (And a third component, memory, as applied to opening play and checking endgames against known results.) Iteration involved building a tree of all possible moves from a given position, defining a search space, but the combinatorial explosion in potential moves limits how far out this can go. The computational aspect is in evaluating the resulting positions in terms of balance (not just material, but control of space and other tactical and positional factors) and stability (to determine whether to evaluate positions further down the tree; for a simple instance, if there are checks available to either side). Optimizing the interaction between iteration and computation is what lends strength to the result; it also governs strategies for human play against computers, in which long-term strategic considerations, beyond the horizon that such iterated computation can detect, become the tactic.
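The iteration/computation tradeoff described above, and the horizon it creates, can be sketched in miniature. What follows is a toy depth-limited minimax over an invented game tree, not chess: the node names, static evaluations, and tree shape are all made up for illustration. The point is only the mechanism: at the depth cutoff the search must trust its static evaluation, so a refutation lying just beyond the horizon is invisible until the search deepens.

```python
# Toy game tree: node -> (static evaluation from the root player's view,
# list of successor nodes). All names and numbers here are invented.
TREE = {
    "root":  (0,  ["grab", "quiet"]),
    "grab":  (+3, ["g1"]),   # material grab: looks great at shallow depth...
    "g1":    (+3, ["g2"]),
    "g2":    (-9, []),       # ...but the refutation lies beyond the horizon
    "quiet": (+1, ["q1"]),
    "q1":    (+1, ["q2"]),
    "q2":    (+1, []),
}

def minimax(node, depth, maximizing):
    evaluation, children = TREE[node]
    if depth == 0 or not children:
        return evaluation    # the "computation" half: static evaluation
    # the "iteration" half: build out the tree of replies
    scores = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

def best_move(depth):
    return max(TREE["root"][1], key=lambda c: minimax(c, depth - 1, False))

print(best_move(2))  # shallow search falls for the grab: grab
print(best_move(4))  # deeper search sees past the horizon: quiet
```

A real engine layers alpha-beta pruning, stability checks, and move ordering on top of this skeleton, but the horizon is structural: some refutation always lies one ply deeper than the budget allows, which is exactly the seam the anti-computer strategy mentioned above exploits.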
To which I'll add: Consider chess problems of the "mate in n moves" variety. Computers have outperformed humans in solving these for decades; computers exhaust the search space as a first resort, humans as a last resort. Is this then the same process?
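The computer's first-resort sweep has a simple shape: an AND-OR search in which some attacking move must work and every defensive reply must fail. Here is a minimal sketch over an abstract game graph; real chess move generation and mate detection are omitted, and every position name and the mate set are invented for illustration.

```python
# Abstract game graph: position -> legal successor positions.
# All data here is invented; a real solver would generate chess moves.
GRAPH = {
    "start": ["key", "blunder"],   # attacker to move
    "key": ["d1", "d2"],           # defender to move after the key move
    "blunder": ["d3"],
    "d1": ["m1"], "d2": ["m2"], "d3": ["safe"],
    "m1": [], "m2": [], "safe": [],
}
MATES = {"m1", "m2"}  # positions where the side to move is checkmated

def mate_in(pos, n, attacker_to_move=True):
    if not attacker_to_move:
        if pos in MATES:
            return True      # defender is checkmated: success
        if not GRAPH[pos]:
            return False     # no reply but no mate: stalemate
        # EVERY defensive reply must still lose (the AND part)
        return all(mate_in(nxt, n, True) for nxt in GRAPH[pos])
    if n == 0 or not GRAPH[pos]:
        return False         # attacker is out of moves
    # SOME attacking move must force mate within n moves (the OR part)
    return any(mate_in(nxt, n - 1, False) for nxt in GRAPH[pos])

print(mate_in("start", 1))  # False: nothing mates in one here
print(mate_in("start", 2))  # True: the key move forces mate in two
```

Note that nothing in this procedure resembles the human solver's approach of spotting a thematic idea first and verifying it afterward; the sweep simply visits everything, which is why it wins at this game.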