Code:
int Search(int depth)
{
    HashEntry *hash = &hashTable[key];
    if(hash->signature == key) { // hash hit
        if(hash->depth & 128) return 0; // bit 128 flags a repetition: entry is currently being searched
        ... // other hash-hit stuff (check depth bounds and hash move)
        hash->depth |= 128; // mark entry as being searched
    }
    ...
    for(iterDepth=startDepth; iterDepth<=depth; iterDepth++) { // IID loop
        ...
        hash->signature = key;
        hash->depth = iterDepth | 128; // hash store, keeping the 'busy' flag set while still in the node
    }
    hash->depth &= ~128; // unmark when leaving the node
    return bestScore;
}
All simple enough, and (except perhaps speed-wise) I don't see any reason why this should not work exactly the same as the more conventional method of looping over a stack of visited hash keys to detect a repetition.
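For comparison, the conventional method mentioned above could be sketched roughly like this. This is only an illustrative sketch, not code from any particular engine; the names (repKeys, ply, EnterNode, etc.) are mine, and the step of 2 assumes a repetition can only occur with the same side to move:

```c
#include <stdint.h>

#define MAX_PLY 256

static uint64_t repKeys[MAX_PLY]; /* hash keys of the positions on the path to the root */
static int ply;                   /* current distance from the root */

/* push / pop the key when entering / leaving a node */
void EnterNode(uint64_t key) { repKeys[ply++] = key; }
void LeaveNode(void)         { ply--; }

/* returns 1 if 'key' already occurs on the current path (a tree repetition);
   only every second entry is compared, i.e. positions with the same side to move */
int IsRepetition(uint64_t key)
{
    int i;
    for (i = ply - 2; i >= 0; i -= 2)
        if (repKeys[i] == key) return 1;
    return 0;
}
```

In this scheme the hash table is never touched at all for repetition detection, which is exactly why the two methods ought to behave the same.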
It seems, however, that the version that does this is seriously crushed by the old one (some 100 Elo weaker?). When I remove the setting of the 'busy' flag in the initial probe, the problem disappears, but I have the feeling that this is merely because doing so renders the mechanism almost completely ineffective: in stable regions of the tree each node would already have been searched at one depth lower, because of the iterative deepening in the root, so the IID loop would in fact not loop at all, but execute only once, for the final depth, as all preceding iterations would have been satisfied from the hash probe. You would then mark the node as 'busy' only when you are done with it, and clear the flag immediately afterwards.
It is hard for me to believe that checking for repetitions of tree positions would hurt this much. But I also see it in the time-to-depth on simple endgame positions (where hashing has the biggest impact), like KPK. The original version shoots up to depth 23, while in the same time the version that checks repetitions reaches only depth 12, with about as many nodes. It is as if the repetition checking completely wrecks the hash table.
But this cannot be right, right? I don't actually touch the score in the entries of the repeated nodes; I just hide a flag in their depth field, which is later removed. That should be fully equivalent to testing for the repetition in a list of hash keys and returning a 0 score based on that, without touching the hash entry at all. The only hash entries that could be affected are those of the parent nodes, when their natural score would be negative and the opportunity to repeat arises from a preceding clumsy manoeuvre of the winning side. But that would be exactly the same in the list-of-keys method.
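To make the claim concrete: the flag arithmetic itself really is reversible, so the stored depth survives the mark/unmark cycle intact. A tiny sketch (my own helper names, and it obviously assumes real search depths stay below 128):

```c
#define BUSY 128 /* bit hidden in the depth field while the node is on the path */

unsigned char MarkBusy(unsigned char depth)  { return depth | BUSY; }
int           IsBusy(unsigned char depth)    { return (depth & BUSY) != 0; }
unsigned char ClearBusy(unsigned char depth) { return depth & ~BUSY; }
```

So whatever goes wrong, it should not be corruption of the depth field itself, as long as depths never reach 128.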
So this really puzzles me. I cannot find a fault in the implementation, and the whole thing is so simple that one would not expect one. Is it essential not to store the scores of nodes that found a move to a repetition in the hash table? I have heard of people suppressing the storing of 0 scores, which always struck me as a nasty kludge, because 0 scores can arise from normal evaluation as well. And could this really be worth 100 Elo?
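The kludge I mean would amount to something like the following, as I understand it from hearsay; the struct and the function name are hypothetical, not anyone's actual code:

```c
#include <stdint.h>

typedef struct {
    uint64_t signature;
    int      depth;
    int      score;
} HashEntry;

/* refuse to store exact-zero scores, so draw scores (e.g. from repetitions)
   never enter the table; the 'nasty' part is that this also drops entries
   whose genuine evaluation just happens to be 0 */
void HashStore(HashEntry *hash, uint64_t key, int depth, int score)
{
    if (score == 0) return;
    hash->signature = key;
    hash->depth = depth;
    hash->score = score;
}
```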