Stochastic Bookmark

abstruse unfinished commentary

about correspondence

3.9.07

Daniel Dennett exceeds his competence ...

... in a Technology Review piece on chess & AI, performing a reductio ad absurdum on his own argument, which betrays a cluelessness at several levels, not least in provocateurship. As with much polemic, it is seeded with some reasonable observations but distorted by tendentiousness and weakness of argument buttressed by spleenful rhetoric. It would deserve a thorough fisking, were it to rise to that level; but the interesting questions that could be raised and investigated (What is cognition? What is thought? What is intelligence? What is consciousness? And how do they depend upon one another?) are not broached by Dennett here, much less pursued. So I'll fisk only in part. I don't have any brief against Dennett or his views generally; I just think that here he gives more comfort to the other side of the question. While he somewhat justifiably claims that critics have moved the goalposts, he then proceeds to do so himself, more radically, complaining all the while, and in the process scoring an own goal.

"Chess requires brilliant thinking, supposedly the one feat that would be--forever--beyond the reach of any computer. But for a decade, human beings have had to live with the fact that one of our species' most celebrated intellectual summits--the title of world chess champion--has to be shared with a machine, Deep Blue, which beat Garry Kasparov in a highly publicized match in 1997."
Error of fact: The title of world chess champion was never at stake in an exhibition match. And the premise that chess requires brilliant thinking rather than massive computation is just what Deep Blue's accomplishment refutes. (Oh, but we're in the popular imagination. So massive computation is brilliant thinking.)

"The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don't know when to accept a draw."
Um, no. The best computer chess makes moves inexplicable to a human, moves which may well be objectively best but which come from no formulation of a strategic plan. The best human chess paradoxically often arises from recovery from earlier less-than-best moves (e.g., deceiving the opponent along a false trail). The bit about not knowing when to accept a draw is specious, since that is a matter of convenience in play against humans.

"When Deep Blue beat Kasparov in 1997, many commentators were tempted to insist that its brute-force search methods were entirely unlike the exploratory processes that Kasparov used when he conjured up his chess moves. But that is simply not so. Kasparov's brain is made of organic materials and has an architecture notably unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine that has an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches."
Seizing upon the entirely unlike, this commentator is tempted to insist that the process is exactly the same, except for the differences? This is pruning an argument rather close to topiary. What matters is not whether the heuristic is explicit or implicit; what matters is whether the heuristic is imposed or self-generated.

"The fact is that the search space for chess is too big for even Deep Blue to explore exhaustively in real time, so like Kasparov, it prunes its search trees by taking calculated risks, and like Kasparov, it often gets these risks precalculated. Both the man and the computer presumably do massive amounts of "brute force" computation on their very different architectures. After all, what do neurons know about chess? Any work they do must use brute force of one sort or another."
The fact is that computers exhaust a large fraction of the search space locally, while the human chessplayer's subconscious calculation is highly inexact (though refined by pattern recognition honed by experience) and of course the conscious calculation only traverses a sliver of the search space, and even that not fully. There is a difference (and interplay) between filtration and selection. But define "brute force" broadly enough, and everything is merely a matter of computation -- welcome back to the clockwork universe.
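
(To make "pruning" concrete, here is a minimal sketch of alpha-beta search, the textbook mechanism behind such cutoffs -- my own toy illustration, not Deep Blue's code. It assumes the third-party python-chess package, with evaluate() a crude material count of my own standing in for a real evaluation function.)

# Minimal alpha-beta sketch -- illustrative only, not Deep Blue's code.
# Assumes the third-party python-chess package; evaluate() is a toy
# material count standing in for a real positional evaluation.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    # Material balance from the side-to-move's point of view.
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def alphabeta(board: chess.Board, depth: int, alpha: int, beta: int) -> int:
    # Negamax with alpha-beta cutoffs: branches that provably cannot
    # change the fixed-depth result are discarded unsearched.
    if board.is_checkmate():
        return -10**6                      # side to move is mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -10**6
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -alphabeta(board, depth - 1, -beta, -alpha))
        board.pop()
        alpha = max(alpha, best)
        if alpha >= beta:                  # the "pruning", such as it is
            break
    return best

Note that the cutoff is exact bookkeeping -- the discarded branches provably cannot alter the fixed-depth result -- which is rather the point: the heuristic is imposed by design, not self-generated.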

"It may seem that I am begging the question by describing the work done by Kasparov's brain in this way, but the work has to be done somehow, and no way of getting it done other than this computational approach has ever been articulated. It won't do to say that Kasparov uses "insight" or "intuition," since that just means that ­Kasparov himself has no understanding of how the good results come to him. So since nobody knows how Kasparov's brain does it--least of all Kasparov himself--there is not yet any evidence at all that Kasparov's means are so very unlike the means exploited by Deep Blue."
Yes, begging the question. No, there is abundant evidence that Kasparov's and Deep Blue's means differ substantively, even though we can specify little of the former. Drawing analogies between mental and computational processes can help elucidate the former, but the ability to draw analogies is one distinguishing characteristic of intelligence, however defined. Add a meaningless pawn to a chess position, and the human recognizes the similarity while the computer relaunches a full analysis. (Add a meaningful pawn and the human may well be misled.)

But Dennett proves incapable of analogizing, relying instead on the brute force of rhetoric. The whole silicon/protein dichotomy is a red herring: The difference between emergent behaviors of designed algorithms versus adaptive systems is what's in play. (Should some cellular automata be considered more intelligent because they give rise to more complex behavior? Or simpler?) The latter may or may not reduce to some algorithmic architecture, and might usefully be modelled by such. But there is no evidence that they must be algorithmic, much less designed.

(Older speculations around this topic, in fact and fiction.)

Addendum: On second thought, I'll save you a click on the first link:
Early attempts to replicate human modes of analysis were (and still are) an abject failure. The way forward was to make the best use of what computers do best: iteration and computation. (And a third component, memory, as applied to opening play and checking endgames against known results.) Iteration involves building a tree of all possible moves from a given position, defining a search space, but the combinatorial explosion in potential moves limits how far out this can go. The computational aspect is in evaluating the resulting positions in terms of balance (not just material, but control of space and other tactical and positional factors) and stability (to determine whether to evaluate positions further down the tree; for a simple instance, if there are checks available to either side). Optimizing the interaction between iteration and computation is what lends strength to the result; it also governs strategies for human play against computers, in which long-term strategic considerations, beyond the horizon that such iterated computation can detect, become the tactic.
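
(A minimal sketch of that interplay -- my own toy, not any engine's actual code; it assumes the python-chess package and reuses the material-count evaluate() from the sketch above. Iteration builds the tree to a fixed depth, computation scores the leaves, and a crude stability test extends the search a ply when the side to move is in check.)

# Iteration (tree building) vs computation (leaf scoring), with a crude
# stability test: a leaf still "in flux" (side to move in check) gets
# searched one ply further rather than statically evaluated. Real engines
# bound such extensions; this toy trusts the checks to run out.
# My own sketch; assumes python-chess and the evaluate() defined above.
import chess

def search(board: chess.Board, depth: int) -> int:
    if board.is_checkmate():
        return -10**6                      # side to move is mated
    if board.is_game_over():
        return 0                           # stalemate or other draw, crudely
    if depth <= 0 and not board.is_check():
        return evaluate(board)             # the computation half
    best = -10**6
    for move in board.legal_moves:         # the iteration half
        board.push(move)
        best = max(best, -search(board, depth - 1))
        board.pop()
    return best

The depth parameter is the horizon; what lies beyond it is exactly what no refinement of leaf evaluation recovers -- hence the anti-computer strategy just mentioned.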
To which I'll add: Consider chess problems of the "mate in n moves" variety. Computers have outperformed humans in solving these for decades; computers exhaust the search space as a first resort, humans as a last resort. Is this then the same process?
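
(For concreteness, the problemist's task done the computer's way, again a toy sketch of my own on python-chess: "mate in n" is a pure and-or search -- some attacking move such that every defense still succumbs to mate in n-1 -- and the machine simply exhausts it, exponential cost and all.)

# Exhaustive "mate in n" solver: can the side to move force mate in at
# most n of its own moves, against any defense? Pure and-or search over
# the full tree -- exhaustion as first resort, so only toy values of n
# are practical. My own sketch; assumes the python-chess package.
import chess

def mate_in(board: chess.Board, n: int) -> bool:
    # OR node: the attacker needs *some* move that works.
    if n <= 0:
        return False
    for move in board.legal_moves:
        board.push(move)
        if board.is_checkmate():
            board.pop()
            return True
        forced = (not board.is_game_over()) and all_replies_lose(board, n)
        board.pop()
        if forced:
            return True
    return False

def all_replies_lose(board: chess.Board, n: int) -> bool:
    # AND node: *every* defense must still allow mate in n-1.
    for reply in board.legal_moves:
        board.push(reply)
        forced = mate_in(board, n - 1)
        board.pop()
        if not forced:
            return False
    return True

# e.g., a bare back-rank mate in one:
# mate_in(chess.Board("6k1/5ppp/8/8/8/8/8/4R2K w - - 0 1"), 1)  -> True

The machine exhausts first and finds the idea never; the solver finds the idea first and calculates only to confirm it.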

5 Comments:

Anonymous said...

Sigh. He really gives computer science a bad name. He's still a functionalist after all these years, locked in his old argument with Searle, trying to rebut the Chinese room (not that there aren't problems with that argument; it's just that he's engaging with the wrong one).

It's even more frustrating when people on the other side say, "Oh, it's not algorithmic AT ALL," and then do some hand-waving, alluding to magical feats of cognition that cannot be replicated computationally. (The extreme is Hubert Dreyfus's position that computers are *ontologically* incapable of thinking as humans, as they are not Heideggerian actors in the world.)

Anyway, it all reads like extremist rhetoric to me. Like you say, the thing to explore is the gap between the two: why some problems (chess, lexical searching) are comparatively easy for a computer to solve, while others (speech recognition, vision, semantic analysis) are very hard. Figuring out how we do those things with relative ease will be the most productive avenue.

4/9/07 12:35  
Anonymous said...

Searle's "chinese room" argument seems quite sound really; though it's refuttable. That's Searle's problem--he seems to suggest it isn't refuttable.

Chess strategy most likely involves some holistic type of thinking beyond mere computation (or maybe it's a higher type of computation). It's not a memory issue with Deep Blue, right? There's some type of awareness that a Kasparov has for the positions, which is not merely calculation of moves ahead--though it could be calculation of moves ahead (though a Kasparov-level player would presumably know what to do "out of book" as well). On my chess engine (Fritz, and still rather powerful--I doubt many hacks could beat it at full expert strength), pop-ups appear when someone plays odd openings, or plays "out of book" (since the comp. can check moves against its database of openings and games). Not sure of the exact moves against DB, but one wonders whether Kasparov could have played some odd variations that were not in book. I bet if he plays standard king-side or queen's-gambit type stuff, he loses. I wager in a few years computers will defeat grandmasters regularly.

9/10/07 12:38  
nnyhav said...

It's not just book. Computers already play at grandmaster level. But even a piddling submaster can beat the computers.

9/10/07 16:24  
Anonymous said...

OK, but look at example 3 from your link. That is what I am saying: there is some strategic or holistic problem with some of the programs (and one notes that in other notated computer games). Obvious mate--and it's black's move! So WTF?

A program might perform super calculations for moves ahead, but can't "see" the position or something: it's playing tactics--not strategy, probably. Tho' I think they admit that was a programmer error.

Even Chess Extreme CD off the shelf at advanced level will sometimes glitch and miss an obvious move. Whatevs. I suspect once all the glitches are gone, and the databases perfected, along with some strategy or position functions (which I think Deep Blue had, maybe), humans will lose all the time...

9/10/07 19:14  
Anonymous said...

Yeah Nemeth: that is what I am saying: just dick with the opening, make 1 pawn moves, etc. Set up some strange position, and the program doesn't know what to do. So what? Doesn't mean much. But that can probably be accounted for (and may have been). In other words, Nemeth tweaks the program and gets a trick win: he doesn't really defeat it. If he plays a standard opening, I bet he loses (unless the program is f-ed, as in example 3).

You sound nearly platonic at times, Herr SB.

9/10/07 22:23  
