Transposition Table updates in Stockfish

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

hgm
Posts: 27809
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Transposition Table updates in Stockfish

Post by hgm »

Draft is a highly over-valued property of TT entries. Of course a hit on an entry of very high draft (with OK bounds) can save you a lot of work, much more than a hit on an entry with low draft. But the point that is often overlooked is that hash probes that need high draft are much less frequent than probes for which a low draft already suffices.

From the viewpoint of the hash slot, having a slot store a single d=20 entry on which you get 1 hit does less good than a slot that stores 1000 different d=10 entries on which you get 5 hits each. (With an effective branching ratio of 2, d=20 searches have a 1000 times bigger tree than d=10 searches, and d=10 probes are 1000 times more common than d=20 probes.) So it is in fact not obvious that preferring high drafts buys you anything at all.

A much more important effect is that the rate of hash hits on a given position decreases over time. Immediately after the entry was created, there is a large probability it will be quickly revisited, through a transposition of recent moves (i.e. moves close to the leaves). After that you become dependent on transpositions involving moves closer to the root, and such moves have a much slower turnover.

This is why TT slots are used much more effectively by storing recently made entries in them. And this is why equi-distributed draft replacement works so much better than other replacement schemes, when space gets really tight.
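One possible reading of "equi-distributed draft" replacement is to keep the number of stored entries roughly equal across drafts: track a global population count per draft and, on a collision, evict the bucket entry whose draft is currently most over-represented. The sketch below illustrates that idea; the entry layout, field names, and the population-count bookkeeping are assumptions for illustration, not code from any actual engine.

```cpp
#include <array>
#include <cstdint>

constexpr int MAX_DRAFT   = 64;
constexpr int BUCKET_SIZE = 4;

struct Entry {
    uint32_t key   = 0;   // 0 marks an empty slot (illustrative convention)
    int16_t  score = 0;
    uint8_t  draft = 0;
    uint8_t  flags = 0;   // bound type, age, etc.
};

struct Bucket { std::array<Entry, BUCKET_SIZE> e; };

std::array<uint64_t, MAX_DRAFT> draftCount{};  // entries stored per draft

// Pick the victim: an empty slot if there is one, otherwise the slot
// whose draft has the largest population in the whole table.
int pickVictim(const Bucket& b) {
    int victim = 0;
    uint64_t best = 0;
    for (int i = 0; i < BUCKET_SIZE; ++i) {
        if (b.e[i].key == 0) return i;          // prefer empty slots
        uint64_t n = draftCount[b.e[i].draft];
        if (n > best) { best = n; victim = i; }
    }
    return victim;
}

void store(Bucket& b, uint32_t key, int16_t score, uint8_t draft) {
    int v = pickVictim(b);
    if (b.e[v].key != 0)
        --draftCount[b.e[v].draft];   // the evicted draft loses one entry
    b.e[v] = Entry{key, score, draft, 0};
    ++draftCount[draft];              // the new draft gains one
}
```

The effect is self-balancing: drafts that flood the table (the cheap, frequent low-draft stores) become the preferred eviction targets, so no single draft can monopolize the slots.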

Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: Transposition Table updates in Stockfish

Post by Rein Halbersma »

hgm wrote:Draft is a highly over-valued property of TT entries. Of course a hit on an entry of very high draft (with OK bounds) can save you a lot of work, much more than a hit on an entry with low draft. But the point that is often overlooked is that hash probes that need high draft are much less frequent than probes for which a low draft already suffices.

From the viewpoint of the hash slot, having a slot store a single d=20 entry on which you get 1 hit does less good than a slot that stores 1000 different d=10 entries on which you get 5 hits each. (With an effective branching ratio of 2, d=20 searches have a 1000 times bigger tree than d=10 searches, and d=10 probes are 1000 times more common than d=20 probes.) So it is in fact not obvious that preferring high drafts buys you anything at all.

A much more important effect is that the rate of hash hits on a given position decreases over time. Immediately after the entry was created, there is a large probability it will be quickly revisited, through a transposition of recent moves (i.e. moves close to the leaves). After that you become dependent on transpositions involving moves closer to the root, and such moves have a much slower turnover.

This is why TT slots are used much more effectively by storing recently made entries in them. And this is why equi-distributed draft replacement works so much better than other replacement schemes, when space gets really tight.
I tried to implement your scheme in my draughts program a while ago based on some of your old posts. I currently use the following 3 rules in order on my buckets (8 entries of 8 bytes each):

1) replace any empty or old entry
2) replace the first entry if the new draft undercuts the old draft by one
3) replace the shallowest entry

Is this how one should implement your equidistributed scheme using a bucket?
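The three rules above could be sketched roughly as follows. This is only an illustration of the rules as stated in the post: the entry layout, the `age`-as-generation encoding, and the reading of "undercuts by one" as "new draft is at most one below the old draft" are all assumptions.

```cpp
#include <array>
#include <cstdint>

constexpr int BUCKET_SIZE = 8;   // 8 entries of 8 bytes each, as in the post

struct Entry {
    uint32_t key   = 0;   // 0 marks an empty slot (illustrative convention)
    int16_t  value = 0;
    uint8_t  draft = 0;
    uint8_t  age   = 0;   // search generation when the entry was stored
};

int chooseSlot(const std::array<Entry, BUCKET_SIZE>& b,
               uint8_t newDraft, uint8_t currentAge) {
    // Rule 1: replace any empty or old (stale-generation) entry.
    for (int i = 0; i < BUCKET_SIZE; ++i)
        if (b[i].key == 0 || b[i].age != currentAge)
            return i;

    // Rule 2: replace the first entry if the new draft undercuts
    // its draft by at most one (assumed interpretation).
    if (newDraft + 1 >= b[0].draft)
        return 0;

    // Rule 3: replace the shallowest entry.
    int shallowest = 0;
    for (int i = 1; i < BUCKET_SIZE; ++i)
        if (b[i].draft < b[shallowest].draft)
            shallowest = i;
    return shallowest;
}
```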
Onno Garms
Posts: 224
Joined: Mon Mar 12, 2007 7:31 pm
Location: Bonn, Germany

Re: Transposition Table updates in Stockfish

Post by Onno Garms »

hgm wrote: And this is why equi-distributed draft replacement works so much better than other replacement schemes, when space gets really tight.
Thanks for your post. I understood the first paragraphs, but I'm not sure about this last sentence.

Do I understand you correctly that you are not writing about what to do on a hash hit, but about what to do when there is no free entry in the cluster (of size 4)? And do I further understand correctly that you compared replacing the shallowest entry against selecting the entry to be replaced at random, equi-distributed?

If so, did you consider replacing the oldest entry instead of a random one? You might increment a 64-bit counter every time you store an entry and write something like bits 10-41 of it into the entry (if you have some bits in the entry left).
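The counter idea above could look something like this. The names and the oldest-entry selection are illustrative assumptions; only the bit range (10-41) is taken from the post, and the wraparound of the truncated stamp is deliberately ignored here for brevity.

```cpp
#include <cstdint>

// Global counter, incremented on every TT store.
uint64_t storeCounter = 0;

struct Entry {
    uint32_t stamp = 0;   // bits 10..41 of storeCounter at store time
    // ... key, score, draft, etc. would follow in a real entry
};

// Produce the 32-bit stamp for the next store: bits 10-41 of the counter.
uint32_t nextStamp() {
    ++storeCounter;
    return static_cast<uint32_t>((storeCounter >> 10) & 0xFFFFFFFFull);
}

// In a full cluster, evict the entry with the smallest stamp,
// i.e. (approximately) the oldest one. Ignores stamp wraparound.
int oldest(const Entry (&c)[4]) {
    int best = 0;
    for (int i = 1; i < 4; ++i)
        if (c[i].stamp < c[best].stamp)
            best = i;
    return best;
}
```

Shifting by 10 means the stamp only advances once per 1024 stores, which is the point: a coarse timestamp that fits in 32 bits yet still orders entries that are millions of stores apart.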
Onno Garms
Posts: 224
Joined: Mon Mar 12, 2007 7:31 pm
Location: Bonn, Germany

Re: Transposition Table updates in Stockfish

Post by Onno Garms »

OK, I compared three schemes in Onno:

1. save upper and lower bound, keep larger depth (as in Fruit)
2. save upper and lower bound, always overwrite
3. save just one value and remember its type, always overwrite (as in Stockfish)
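The storage trade-off between these schemes can be made concrete with two illustrative entry layouts. Field names and widths below are assumptions, not the actual Fruit or Stockfish structs; the point is that both bounds cost an extra 16 bits, which a Stockfish-like entry spends on the static evaluation instead.

```cpp
#include <cstdint>

// Schemes 1 and 2: both bounds stored explicitly.
struct EntryBothBounds {
    uint32_t key;
    int16_t  lower;    // best proven lower bound
    int16_t  upper;    // best proven upper bound
    uint8_t  draft;
    uint8_t  age;
};

// Scheme 3 (Stockfish-like): one value plus its bound type,
// with the freed 16 bits holding the static evaluation.
enum Bound : uint8_t { LOWER, UPPER, EXACT };

struct EntrySingleValue {
    uint32_t key;
    int16_t  value;
    int16_t  staticEval;  // the field that uses up the saved space
    uint8_t  draft;
    uint8_t  bound;       // a Bound value
};
```

With these layouts both entries occupy the same footprint, which illustrates the remark above: it is not that scheme 3 is smaller, but that the space the second bound would need is already spoken for.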

1 and 2 have about the same performance. 3 seems to be worse (-1.0% in 5852 games; likelihood of superiority against 1 is 11% or 6.7%, depending on the way I compute it).

However, in Stockfish there is no space left to store upper and lower bounds separately, because Stockfish stores the static evaluation.

Did somebody else compare schemes like above?