Transposition Age Tutorial

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Transposition Age Tutorial

Post by D Sceviour »

bob wrote:I am not sure what you mean by "no error involved" however...
"This is not about errors" was mentioned in your previous post. As HGM also mentioned, transposition age is not necessary and age is only important for replacement. So "no error involved" means if every hash entry is completely overwritten then there is no error to introduce.

Thus, what is the advantage of using a transposition age? I suppose after a search is completed, if the draft, score, type and hashkey are all identical to the initial entries then only the transposition age needs to be updated. But where is the savings? If the search is threaded, the program still has to stop for a spinlock or lockless save just to update the transposition age.

The conclusion I am coming to is that transposition age can be completely ignored without penalty.
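[Editor's note: the spinlock concern raised above has a well-known workaround, the lockless hashing trick described by Hyatt and Mann and used in Crafty: the signature is stored XORed with the data word, so a store torn by a concurrent writer simply fails validation on probe and no lock is needed at all. A minimal sketch, with a hypothetical two-word slot layout:]

```c
#include <stdint.h>

typedef struct {
    uint64_t key_xor_data; /* signature XORed with the data word */
    uint64_t data;         /* packed score/draft/age/move        */
} TTSlot;

/* Store without a lock: XOR the data into the stored key.  If two
   threads interleave their stores, the key/data pair will not
   validate on probe, so the torn entry is simply ignored. */
void tt_store(TTSlot *slot, uint64_t key, uint64_t data)
{
    slot->key_xor_data = key ^ data;
    slot->data         = data;
}

/* Probe returns 1 and fills *data only if the entry validates. */
int tt_probe(const TTSlot *slot, uint64_t key, uint64_t *data)
{
    uint64_t d = slot->data;
    if ((slot->key_xor_data ^ d) != key)
        return 0; /* empty, mismatched, or torn by a concurrent store */
    *data = d;
    return 1;
}
```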
kbhearn
Posts: 411
Joined: Thu Dec 30, 2010 4:48 am

Re: Transposition Age Tutorial

Post by kbhearn »

D Sceviour wrote:
bob wrote:I am not sure what you mean by "no error involved" however...
"This is not about errors" was mentioned in your previous post. As HGM also mentioned, transposition age is not necessary and age is only important for replacement. So "no error involved" means if every hash entry is completely overwritten then there is no error to introduce.

Thus, what is the advantage of using a transposition age? I suppose after a search is completed, if the draft, score, type and hashkey are all identical to the initial entries then only the transposition age needs to be updated. But where is the savings? If the search is threaded, the program still has to stop for a spinlock or lockless save just to update the transposition age.

The conclusion I am coming to is that transposition age can be completely ignored without penalty.
example:
you reach a position with key 0x11111111FFFFFFFFFF with a request to search it to depth 12.

it has no match in the HT and needs to be searched from scratch. at the end of the node you go to store the entry in the hashtable. for the first case we'll say that each bucket can only store 1 entry.

the current entry in the bucket is 0x22222222FFFFFFFFFF with depth 14.

should you overwrite this entry? that's what age is for: if it is from an old search the answer is yes, you should overwrite. if the age is current, though, it gets more complicated. the higher-depth entry saves more work per hit, but the most recently searched entry might come up by transposition more often. hence still more complicated replacement schemes, starting with storing multiple entries per bucket. let's say you have 3 now instead of 1 and they contain:

0x22222222FFFFFFFFFF depth 14
0x33333333FFFFFFFFFF depth 16
0x44444444FFFFFFFFFF depth 16

having moved to multiple entries per bucket, it's sensible to replace at least one of those entries with the most recently searched node, as its cousins may transpose to it, while you still have space for the depth-preferred entries. So the first answer would be to replace the 14, but here is where age gets more useful: those 16s could get stuck there, crowding out anything shallower than them for the rest of the game and making it harder and harder to take advantage of the 3 slots; before long it'll be like having 1 always-replace slot taking 3 times the space. Age allows you to mark those 16s so that when they stop getting used they go away. So if one of the 16s is 'old' you replace it BEFORE the 14.
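[Editor's note: the replacement choice described here - evict old entries before current ones, and among candidates prefer the shallowest depth - can be sketched as follows. Field names are illustrative, not from any particular engine.]

```c
#include <stdint.h>

typedef struct {
    uint64_t key;   /* position hash signature                  */
    int      depth; /* draft the entry was searched to          */
    int      age;   /* search counter when the entry was stored */
} TTEntry;

/* Pick which of the bucket's entries to overwrite.  An entry whose age
   differs from the current search counter is "old": its subtree is no
   longer being visited, so it is evicted before any current entry,
   regardless of how deep it is.  Ties are broken toward the shallowest
   depth, since shallow entries save the least work per hit. */
int pick_victim(const TTEntry *bucket, int n, int current_age)
{
    int victim = 0, best = -1000000;
    for (int i = 0; i < n; i++) {
        int old = bucket[i].age != current_age;
        /* badness: old entries beat current ones; then lower depth wins */
        int badness = old * 1000 - bucket[i].depth;
        if (badness > best) { best = badness; victim = i; }
    }
    return victim;
}
```

With the bucket from the example (depths 14, 16, 16), the depth-14 entry is evicted only while all three are current; as soon as one of the 16s goes stale it becomes the victim instead.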
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Transposition Age Tutorial

Post by D Sceviour »

kbhearn wrote:should you overwrite this entry? that's what age is for, if this is from an old search the answer is yes, you should overwrite.
This is trivial. No transposition age needs to be tracked to do this.
kbhearn wrote:if the age is current though it gets more complicated. the higher depth entry saves more work per hit, but the most recently searched entry might come up by transposition more often.
If the existing entry has a shallower draft, there should be no problem to simply overwrite the existing entry. As mentioned earlier, crossing a similar hash entry that has a deeper draft is common, especially when the alpha/beta bounds make radical changes. If the situation is bad enough, then the hash table is "shocked" and none of the entries are reliable. In this case overwriting is imperative and again transposition age can be ignored.
kbhearn wrote:Hence more complicated replacement schemes still, starting with storing multiple entries per bucket.
The multiple hash tables look interesting. What programs currently use this?
kbhearn
Posts: 411
Joined: Thu Dec 30, 2010 4:48 am

Re: Transposition Age Tutorial

Post by kbhearn »

D Sceviour wrote:
kbhearn wrote:should you overwrite this entry? that's what age is for, if this is from an old search the answer is yes, you should overwrite.
This is trivial. No transposition age needs to be tracked to do this.
kbhearn wrote:if the age is current though it gets more complicated. the higher depth entry saves more work per hit, but the most recently searched entry might come up by transposition more often.
If the existing entry has a shallower draft, there should be no problem to simply overwrite the existing entry. As mentioned earlier, crossing a similar hash entry that has a deeper draft is common, especially when the alpha/beta bounds make radical changes. If the situation is bad enough, then the hash table is "shocked" and none of the entries are reliable. In this case overwriting is imperative and again transposition age can be ignored.
kbhearn wrote:Hence more complicated replacement schemes still, starting with storing multiple entries per bucket.
The multiple hash tables look interesting. What programs currently use this?
If you use 1-entry always-replace, which isn't a bad starting point for a new program, you're right that the age is useless.

All the major programs to my knowledge use the multiple-entries-per-bucket approach, though, in order to preserve some high-depth entries, which can save a ton of work. They use a wide array of methods to decide what to replace, but they do always find one thing to throw out. If one or more of the slots in a bucket is strictly depth-preferred, then an age field is a must to make sure it remains useful over the course of the game.
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Transposition Age Tutorial

Post by D Sceviour »

kbhearn wrote:If you use 1-entry always-replace, which isn't a bad starting point for a new program, you're right that the age is useless.

All the major programs to my knowledge use the multiple-entries-per-bucket approach, though, in order to preserve some high-depth entries, which can save a ton of work. They use a wide array of methods to decide what to replace, but they do always find one thing to throw out. If one or more of the slots in a bucket is strictly depth-preferred, then an age field is a must to make sure it remains useful over the course of the game.
OK, Thank you. Multiple hash entries might be the first good excuse for using a transposition age. I will look into that.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Transposition Age Tutorial

Post by bob »

D Sceviour wrote:
kbhearn wrote:should you overwrite this entry? that's what age is for, if this is from an old search the answer is yes, you should overwrite.
This is trivial. No transposition age needs to be tracked to do this.
kbhearn wrote:if the age is current though it gets more complicated. the higher depth entry saves more work per hit, but the most recently searched entry might come up by transposition more often.
If the existing entry has a shallower draft, there should be no problem to simply overwrite the existing entry. As mentioned earlier, crossing a similar hash entry that has a deeper draft is common, especially when the alpha/beta bounds make radical changes. If the situation is bad enough, then the hash table is "shocked" and none of the entries are reliable. In this case overwriting is imperative and again transposition age can be ignored.
kbhearn wrote:Hence more complicated replacement schemes still, starting with storing multiple entries per bucket.
The multiple hash tables look interesting. What programs currently use this?
Not multiple hash tables, but a hash table that hashes to a bucket of entries rather than to one specific entry. A bucket size of 64 bytes is a good value for x86 since that is the cache line size. Now you have a set of hash entries that you can treat like a "set associative cache", where you can replace any one of the entries based on age, depth, etc.

Pretty common, I have done this in Crafty for a long time. Did it in Cray Blitz over 30 years ago in fact...
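[Editor's note: a sketch of the layout described above - four 16-byte entries packed into one 64-byte x86 cache line, with the key hashing to a bucket rather than a single entry, so probing the whole bucket costs one memory fetch. Field widths here are illustrative, not Crafty's actual layout.]

```c
#include <stdint.h>

typedef struct {
    uint64_t key;        /* full or partial signature         */
    int16_t  score;      /* search score                      */
    uint16_t move;       /* best move, packed                 */
    uint8_t  depth;      /* draft                             */
    uint8_t  age_bound;  /* age in high bits, bound type low  */
    uint16_t pad;        /* explicit padding to 16 bytes      */
} TTEntry;               /* 16 bytes */

typedef struct {
    TTEntry entry[4];    /* 4 x 16 = 64 bytes = one cache line */
} TTBucket;

_Static_assert(sizeof(TTEntry) == 16, "entry must be 16 bytes");
_Static_assert(sizeof(TTBucket) == 64, "bucket must fill a cache line");

/* Hash the position key to a bucket, not to a single entry; any of the
   bucket's 4 slots may then be replaced, like a 4-way associative cache. */
static inline TTBucket *tt_bucket(TTBucket *table, uint64_t mask, uint64_t key)
{
    return &table[key & mask]; /* mask = (number of buckets) - 1 */
}
```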
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Transposition Age Tutorial

Post by bob »

D Sceviour wrote:
bob wrote:I am not sure what you mean by "no error involved" however...
"This is not about errors" was mentioned in your previous post. As HGM also mentioned, transposition age is not necessary and age is only important for replacement. So "no error involved" means if every hash entry is completely overwritten then there is no error to introduce.

Thus, what is the advantage of using a transposition age? I suppose after a search is completed, if the draft, score, type and hashkey are all identical to the initial entries then only the transposition age needs to be updated. But where is the savings? If the search is threaded, the program still has to stop for a spinlock or lockless save just to update the transposition age.

The conclusion I am coming to is that transposition age can be completely ignored without penalty.
I've never said otherwise. AGE is ONLY used for replacement decisions, nothing more or less.

The savings is this. If you have entries stored in the table from old searches, and you choose to keep those due to their draft, then the table fills up with old entries. An entry from move 5 is likely NEVER going to be helpful at move 40.

For Crafty, the replacement policy goes like this:

(1) exact signature match is ALWAYS overwritten with the current entry, regardless of draft, age or anything on the existing in-table entry.

(2) if not, then examine the bucket (4 entries with Crafty) to find the set of entries where age doesn't match current value (old entries). From this set, overwrite the entry with the shallowest draft.

(3) if no old entries exist, then replace the shallowest-draft entry in the bucket, regardless of what the draft is..

Step 2 purges old entries before current entries start to get overwritten. I have experimented with replacing the oldest since I have a 9-bit age, but didn't find any particular benefit, as old entries tend not to survive very long anyway, given today's 100M NPS search speeds...
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Transposition Age Tutorial

Post by D Sceviour »

bob wrote:For Crafty, the replacement policy goes like this:

(1) exact signature match is ALWAYS overwritten with the current entry, regardless of draft, age or anything on the existing in-table entry.
Agreed. That is simple.
bob wrote:(2) if not, then examine the bucket (4 entries with Crafty) to find the set of entries where age doesn't match current value (old entries). From this set, overwrite the entry with the shallowest draft.
This makes sense for Crafty's multiple-entries method.
bob wrote:(3) if no old entries exist, then replace the shallowest-draft entry in the bucket, regardless of what the draft is...
It seems another oracle is needed here because depth alone cannot tell how important the hash entry might be in the future. Some of the entries might only be useful for a null move search with a shallow draft. This discussion is helpful as I am getting many ideas to try. Instead of draft, perhaps overwrite the entry that has the largest variance from the current alpha/beta bounds?
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Transposition Age Tutorial

Post by Evert »

D Sceviour wrote: It seems another oracle is needed here because depth alone cannot tell how important the hash entry might be in the future. Some of the entries might only be useful for a null move search with a shallow draft. This discussion is helpful as I am getting many ideas to try. Instead of draft, perhaps overwrite the entry that has the largest variance from the current alpha/beta bounds?
I don't see how that would help you. What determines whether an entry might cause a cut-off is first and foremost the depth. Perhaps if you have a choice between different entries at the same depth, you can pick the one that would not have caused a cut-off over the one that would. Or you choose to preserve the one that would give you a move over the one that wouldn't.

Ultimately, though, you don't know (can't know) whether an entry will be useful in the future. So you try to preserve the entries that are most likely to save you work - which are statistically those with a large depth.

Someone should give a second opinion on this, but I think the main benefit of the transposition table is that it gives you a good candidate move, rather than that it allows you to take a cut-off without doing a search. What I mean is that you see a large increase in strength (or reduction in time-to-depth) when you add the move from the TT for move ordering, and a smaller improvement when you also allow cut-offs. I'm basing that on time-to-depth measurements on my variant engine Postduif, which ran without a transposition table for a good while and still managed decent search depths, but I haven't done a proper test. Would be easy to do, of course.

EDIT:
Ok, you know what? I'm completely wrong with what I said above. I did the test using Jazz' benchmark command:

Code: Select all

Vanilla:
21556529 nodes searched
Elapsed time 11333 ms
1.90205e+06 nodes / s


No TT move:
23294774 nodes searched
Elapsed time 12493 ms
1.86462e+06 nodes / s


No TT cut-off:
27595902 nodes searched
Elapsed time 13837 ms
1.99433e+06 nodes / s


No TT:
36182929 nodes searched
Elapsed time 18021 ms
2.0078e+06 nodes / s
As expected, no TT is worst, followed by no cut-offs, followed by not using the hash move for move ordering.
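[Editor's note: the two TT benefits toggled in the benchmark above can be separated in code roughly like this. The types and flag names are hypothetical, not Jazz's actual code.]

```c
#include <stdint.h>

enum Bound { BOUND_EXACT, BOUND_LOWER, BOUND_UPPER };

typedef struct {
    int move;        /* hash move for ordering (0 = none) */
    int score;
    int draft;       /* depth the stored entry was searched to */
    enum Bound bound;
} TTHit;

/* Benefit 1: a sufficient-draft entry whose bound agrees with the
   window lets us skip the search entirely.  The use_cutoffs flag
   disables this for the "No TT cut-off" benchmark run. */
int tt_cutoff(const TTHit *hit, int depth, int alpha, int beta,
              int use_cutoffs, int *score)
{
    if (!use_cutoffs || hit->draft < depth)
        return 0;
    if (hit->bound == BOUND_EXACT ||
        (hit->bound == BOUND_LOWER && hit->score >= beta) ||
        (hit->bound == BOUND_UPPER && hit->score <= alpha)) {
        *score = hit->score;
        return 1;
    }
    return 0;
}

/* Benefit 2: the hash move seeds move ordering even when the draft is
   too shallow for a cut-off.  Disabled for the "No TT move" run. */
int tt_ordering_move(const TTHit *hit, int use_tt_move)
{
    return use_tt_move ? hit->move : 0;
}
```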
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Transposition Age Tutorial

Post by bob »

D Sceviour wrote:
bob wrote:For Crafty, the replacement policy goes like this:

(1) exact signature match is ALWAYS overwritten with the current entry, regardless of draft, age or anything on the existing in-table entry.
Agreed. That is simple.
bob wrote:(2) if not, then examine the bucket (4 entries with Crafty) to find the set of entries where age doesn't match current value (old entries). From this set, overwrite the entry with the shallowest draft.
This makes sense for Crafty's multiple-entries method.
bob wrote:(3) if no old entries exist, then replace the shallowest-draft entry in the bucket, regardless of what the draft is...
It seems another oracle is needed here because depth alone cannot tell how important the hash entry might be in the future. Some of the entries might only be useful for a null move search with a shallow draft. This discussion is helpful as I am getting many ideas to try. Instead of draft, perhaps overwrite the entry that has the largest variance from the current alpha/beta bounds?
Depth is a very good indicator for how much work this entry will save, as deeper drafts cut off more of a tree.

How can an entry have any variance from alpha/beta bounds with a PVS search? The scores generally tend to remain pretty stable from one search to another until someone makes a mistake...

This has been tested to death over the years, with a couple of Master's theses discussing various issues (not here at UAB, maybe somewhere in England).