Improving hash replacing schema for analysis mode

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

cdani
Posts: 2104
Joined: Sat Jan 18, 2014 9:24 am
Location: Andorra

Improving hash replacing schema for analysis mode

Post by cdani » Wed Jul 05, 2017 6:49 am

It is not about persistent hash, but it should be possible to improve how hash entries are replaced, so that the engine has a better chance of re-finding a line of play it already searched to high depth, for example when going forward/backward in infinite analysis mode.

One of my first thoughts is to add a UCI option that disables incrementing the age at the start of every new search (when the user does or undoes a move).

Another idea is to change the replacement scheme so that higher-depth entries are more likely to be kept, while still always replacing when the keys match, of course.

Probably someone has already investigated these things and wants to share their experience.

hgm
Posts: 23713
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Improving hash replacing schema for analysis mode

Post by hgm » Wed Jul 05, 2017 8:01 am

When your replacement scheme depends on a search number, you should never increment that search number in analyze mode. All searches done in an analysis session are usually parts of a single big 'meta-search', and you should always be prepared to return to positions that have become 'unreachable' through take-back of moves. It should be up to the user to decide when it is time to clear the hash table in such cases. (Although it seems safe to increment the search number at the start of a new game, as the user would probably not start a new game unless he is done analyzing the previous position.)
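A minimal sketch of that policy, assuming a generation counter stamped into hash entries (the names `tt_generation`, `analysing`, and the two callbacks are illustrative, not from any particular engine):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical engine state; names are invented for illustration. */
static uint8_t tt_generation; /* the 'search number' stamped into hash entries */
static bool analysing;        /* true while in analyze mode */

/* Called at the start of every new search (user did or undid a move). */
void on_new_search(void)
{
    if (!analysing)
        tt_generation++; /* in analyze mode all searches form one 'meta-search' */
}

/* Called on 'ucinewgame': the user is presumably done with the old position,
   so entries from the previous session may now be declared stale. */
void on_new_game(void)
{
    tt_generation++;
}
```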

Of course an analysis session that lasts many hours would make the depth-preferred part of your hash table fill up with all the deepest entries from the session, leaving individual searches completely dependent on the replace-only slots. (Which often use only 25% of the table, or even less.) This is not really optimal, and will thus slow down the search somewhat. But always-replace is a bearable replacement scheme, and doesn't cause total disaster.

The problem is in fact exactly the same as when you would have done a single search with the duration of the analysis session. During a search the search number would not be incremented either, and after all stale entries would have been replaced, you become completely dependent on replacing entries of the current search. In that case it will also hurt at some point that the intermediate depths, too low to find a place in the depth-preferred part of the table, but still comparatively valuable, will be flushed out of the always-replace part of the table very quickly, by the flood of QS positions that bombard it. It would be much better to give these deeper entries somewhat more protection than the QS (or d=1 if you don't store QS) entries, but in the always-replace slots they have none.

Equi-distributed draft would prevent this, as each depth would have the same share of the table, rather than in proportion to the rate the search produces them. So rarely produced depths would also be rarely replaced, and survive longer. A more simplistic method is 'undercut replacement', where you allow depth d to be overwritten not only by all depth >= d, but also by d-1. It is thus a somewhat relaxed form of depth-preferred. As always, any form of depth-preferred will have to be accompanied by an always-replace slot, to make sure every position will be stored.
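The undercut rule for a single slot could be sketched like this (the `Entry` layout and function name are invented for illustration):

```c
#include <stdbool.h>

/* One hash slot; only the depth matters here. Layout is invented. */
typedef struct { int depth; /* plus key, move, score, ... */ } Entry;

/* Undercut replacement: a relaxed depth-preferred rule. The new entry
   may overwrite the stored one when its depth is at least as large
   (plain depth-preferred), or exactly one smaller (the 'undercut'). */
bool undercut_should_replace(const Entry *slot, int new_depth)
{
    return new_depth >= slot->depth        /* plain depth-preferred   */
        || new_depth == slot->depth - 1;   /* undercut: d-1 also allowed */
}
```

Because every stored depth can eventually be undercut from below, even high-depth entries are recycled over time instead of clogging the table forever.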

So with buckets of 4 entries, you could have one always-replace slot, one undercut slot, and two depth-preferred slots. An entry to be stored would replace the lowest depth, unless it is one smaller than the second-lowest depth, in which case it would replace the latter instead. This would reserve 50% of the table for storing the highest depths 'forever' (i.e. until the search number gets incremented, at which point many entries are declared 'stale', and replaced even in preference over the lowest depth). The other 50% would be devoted to speeding up the current search, as a somewhat biased always-replace table.


Re: Improving hash replacing schema for analysis mode

Post by cdani » Wed Jul 05, 2017 1:20 pm

hgm wrote: you should never increment that search number in analyze mode.
Thanks! I will do it.
About the rest, I will try to think more on it.


Re: Improving hash replacing schema for analysis mode

Post by hgm » Wed Jul 05, 2017 6:43 pm

For 4-entry buckets the following could be done:

Code:

replace = (bucket->depth[0] <= bucket->depth[1]); // deepest depth-preferred slot (0 or 1)
if(!STALE(bucket->age[replace])) { // deepest is not stale:
  replace ^= 1; // try least deep
  if(!STALE(bucket->age[replace])) {  // other also not stale
    if(bucket->depth[replace] > depth) { // and new not good enough for depth-preferred
      replace = 2 + (bucket->depth[2] > depth + 1); // undercut (2) or always-replace slot (3)
    } else if(bucket->depth[replace] > bucket->depth[2]) { // replaced depth-preferred is valuable
      // copy replaced one to undercut slot before overwriting it (optional)
    }
  }
}
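To try this logic in isolation, it can be wrapped in a function with a toy bucket; the `Bucket` layout and the `STALE` definition below are assumptions, not specified in the post, and the optional copy-to-undercut-slot branch is omitted:

```c
/* Toy harness for the replacement scheme above; layout and STALE
   are invented assumptions, not from the post. */
typedef struct { int depth[4]; int age[4]; } Bucket;

static int search_nr;               /* current search number */
#define STALE(a) ((a) != search_nr) /* entry stems from an old search */

/* Returns the slot (0-3) a new entry of the given depth should go to. */
int choose_slot(const Bucket *bucket, int depth)
{
    int replace = (bucket->depth[0] <= bucket->depth[1]); /* deeper of slots 0,1 */
    if (!STALE(bucket->age[replace])) {    /* deepest is not stale: */
        replace ^= 1;                      /* try the less deep one */
        if (!STALE(bucket->age[replace]))  /* also not stale */
            if (bucket->depth[replace] > depth) /* new not good enough for depth-preferred */
                replace = 2 + (bucket->depth[2] > depth + 1); /* undercut (2) or always-replace (3) */
    }
    return replace;
}
```

Stale entries are replaced first, then the shallower depth-preferred slot; entries too shallow for either fall through to the undercut or always-replace slot.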
