Interesting hash-replacement scheme

Posted: Sat Apr 08, 2017 1:07 pm
by hgm
When we don't want to bother with aligning hash buckets with cache lines (which can waste space if an integer number of entries does not exactly fit the cache line, and makes the code a bit cumbersome), the following replacement scheme seems a nice alternative:

A position can be stored in only three places: the entries N, N+1 and N+2, where the 'primary' index N is calculated from the hash key. This limits the number of entries you have to test for a signature match (a sketch of such a probe is given below the list). When none of the three matches, one of them will have to be replaced, in the following order of priority:

1) If the primary entry is obsolete, as determined by its aging field (e.g. not referenced in this or the previous search), it will be replaced. This will cause obsolete entries to be flushed from the table.
2) Otherwise, the entry with the lowest depth of N and N+1 is designated 'replacement candidate'. This will make the highest depths survive for the entire search (or analysis session), except for the very tiny fraction of cases where two deep entries map next to each other.
3) If the depth to store is exactly one less than that of the replacement candidate (an 'undercut'), replace the latter. This gives good protection to intermediate-depth entries: the higher their depth, the more rarely the search produces a depth high enough to undercut them. But they would not live forever, even in the same search.
4) Otherwise, if the depth of N+2 is lower than or equal to that of the replacement candidate, store the new position in entry N+2.
5) Otherwise, store the new position in the replacement candidate.
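
For reference, the probe over the three slots could look something like this. This is only a minimal sketch, not code from an actual engine: the HashEntry layout, the masking of the primary index and the names hashTable, hashMask and searchNr are my assumptions, modeled on the store code further down.

Code: Select all

  #include <stdint.h>

  typedef struct {           // assumed entry layout
    uint64_t lock;           // signature used for match testing
    int16_t  depth;
    uint8_t  age;            // search number of the last reference
    /* ... score, move, bound type ... */
  } HashEntry;

  extern HashEntry hashTable[]; // assumed allocated with 2 spare entries, so N+2 is always valid
  extern uint64_t  hashMask;    // number of entries - 1 (a power of 2)
  extern int       searchNr;

  // Probe slots N, N+1 and N+2. On a hit, advance 'entry' to the matching
  // slot and refresh its aging field. On a miss, leave 'entry' at the
  // primary slot with *slotPtr at -1, which is exactly the state the
  // store code below expects.
  HashEntry *HashProbe(uint64_t hashKey, int *slotPtr)
  {
    HashEntry *entry = hashTable + (hashKey & hashMask); // primary index N
    *slotPtr = -1;                       // assume a miss
    for(int i = 0; i < 3; i++) {
      if(entry[i].lock == hashKey) {     // signature match
        *slotPtr = i;
        entry += i;                      // point at the matched entry
        entry->age = searchNr;           // mark as referenced in this search
        break;
      }
    }
    return entry;
  }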

Note that an overloaded table suffers a barrage of very-low-depth entries (QS, or if you don't store QS, d=1), so the lowest depth of the triple will almost always be very low. To lose its status as 'always-replace' entry, a slot must be lucky enough to get hit by a depth larger than the lowest depth of the other two. The entry holding that second-lowest depth then becomes the always-replace slot: it is almost immediately overwritten by a QS/d=1 entry, and keeps that role for a long time. Each such event raises the second-lowest depth of the triple, making the chance that the entry holding it can be used by the search of the currently traversed sub-tree smaller and smaller. The undercut puts a stop to that, by occasionally lowering the depth of the second-lowest entry when it is adjacent to the deepest one. And if it is not adjacent, sooner or later the adjacent entry, where all replacements go, will get hit by an equal or deeper entry, so that the formerly protected one is overwritten by a QS/d=1 entry, and then sees its own depth slowly degrade through undercutting. (The 'or equal' in rule 4 tries to make this happen sooner rather than later.)

So 33% of the table will be used to collect the deepest nodes of the search, to hold them 'forever'. (I.e. until they no longer belong to the set of deepest nodes because the search produced enough deeper ones, or are declared obsolete by increasing the search number. The latter is something you would do only in games, not in analysis.)
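
In code the aging update could be as simple as the sketch below (the playingGame flag is hypothetical; the point is just that the counter only advances between game moves):

Code: Select all

  // Before starting a new root search:
  if(playingGame) searchNr++; // entries with searchNr - age >= 2 are now obsolete
  // In analysis mode searchNr stays put, so nothing ever expires.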

Code: Select all

  // hash store
  if(slot < 0) { // indicates we had a miss; 'entry' was left pointing to primary slot
    if(searchNr - entry->age < 2) { // primary entry touched in current or previous search
      slot = (entry[0].depth > entry[1].depth);  // replacement candidate (0 or 1)
      if(resultDepth != entry[slot].depth - 1) { // no undercut
        if(entry[2].depth <= entry[slot].depth) slot = 2; // prefer shallower N+2
      }
      entry += slot; // points to entry to be replaced
    } // else: primary entry is obsolete, and 'entry' already points to it
  }
  entry->lock = hashKey;
  ...
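
To connect this with the probe sketch above, the calling pattern could look roughly as follows (again a sketch under my assumed names; requestedDepth is hypothetical):

Code: Select all

  int slot;
  HashEntry *entry = HashProbe(hashKey, &slot);
  if(slot >= 0 && entry->depth >= requestedDepth) {
    // hash hit deep enough: use the stored score/move for a cutoff
  }
  // ... otherwise search the node, then run the store code above;
  // on a hit 'entry' already points at the matched entry, so the
  // new result simply overwrites it in place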

Re: Interesting hash-replacement scheme

Posted: Sat Apr 08, 2017 10:06 pm
by flok
Strange: I tried a couple of replacement schemes in my program and none gave more than +/- 5 Elo difference.

Re: Interesting hash-replacement scheme

Posted: Sun Apr 09, 2017 12:00 am
by hgm
Well, for one, it depends on how you test. If the hash table is large enough to hold the entire tree, hardly any replacement takes place, so there should be no effect at all. In my experience you only start to notice the effect of limited hash memory when the number of nodes in the search is about 10 times larger than the number of TT entries. From that point on, shrinking the memory further makes the time-to-depth go up. How steeply it goes up depends on the quality of the replacement, but the slope is not too different for most reasonable schemes, and low on an absolute scale anyway (to double the search time you need to shrink the TT size by a factor of 4000 or so). So you only start to get measurable differences if you overload the table by at least a factor of 100.

A truly poor replacement scheme (like depth-preferred only) can cause failure in solving Fine #70, though.

But it is not only a matter of Elo. If you are analyzing a position, it is extremely annoying when the score of a line that you only got right after an hour of analyzing suddenly disappears (because the TT entry was overwritten) after you investigate a small side branch, so that you have to redo the work before you can back up the score further. You don't just want the program to be strong, you want it to be useful. A super-strong program that is not useful is of, well, no use...