Transposition Table replacement schemes

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

AndrewGrant
Posts: 485
Joined: Tue Apr 19, 2016 4:08 am
Location: U.S.A
Full name: Andrew Grant

Transposition Table replacement schemes

Post by AndrewGrant » Thu May 05, 2016 12:27 am

Hey all,

My program currently makes the mistake of discarding all transpositions from the previous search between moves.
Obviously this can be remedied by storing each entry's age and taking it into account on replacement. But I have a few questions about that.

My initial idea was a scheme like this:

Each bucket holds two entries: one depth-preferred, one always-replace. An aged transposition is always replaced, even if it has a greater depth.

I realize now that I should be getting these buckets to fit into a single cache line, and could store anywhere from 4 to 6 entries per bucket (depending on how large I want my key to be).
So my question is, how should the replacement scheme work when there are more than 2 entries per bucket?

My current thought would be...

1. Depth-preferred, ignoring age
2. Replace if newDepth + N*oldAge >= oldDepth
3. Replace if newDepth + 2N*oldAge >= oldDepth
4. Always replace
5. Always replace

It seems like a poor idea to always replace entries that came from a different search, even though some of those positions can no longer occur, or are unlikely to appear in the tree.
Has anyone toyed with some variant of what I proposed, taking into account how old the entry is?
Maybe even the bound type?
Additionally, could we store the age of the current search in an entry whenever that entry is accessed? That way we could throw out any entries that were not used in the previous search.
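In code, the graded scheme I have in mind would be roughly a per-slot predicate like the one below. This is only a sketch under my own assumptions: a five-slot bucket, and N as a tuning constant (here arbitrarily 2):

```c
#define N 2   /* age weight; arbitrary, would need tuning */

/* Should the new result replace what sits in the given slot? */
int should_replace(int slot, int newDepth, int oldDepth, int oldAge)
{
    switch (slot) {
    case 0:  return newDepth >= oldDepth;               /* depth-preferred */
    case 1:  return newDepth + N * oldAge >= oldDepth;
    case 2:  return newDepth + 2 * N * oldAge >= oldDepth;
    default: return 1;                                  /* always replace */
    }
}
```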

bob
Posts: 20550
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Transposition Table replacement schemes

Post by bob » Thu May 05, 2016 3:23 am

AndrewGrant wrote:Hey all,

My program currently makes the mistake of discarding all transpositions from the previous search between moves.
Obviously this can be remedied by storing each entry's age and taking it into account on replacement. But I have a few questions about that.

My initial idea was a scheme like this:

Each bucket holds two entries: one depth-preferred, one always-replace. An aged transposition is always replaced, even if it has a greater depth.

I realize now that I should be getting these buckets to fit into a single cache line, and could store anywhere from 4 to 6 entries per bucket (depending on how large I want my key to be).
So my question is, how should the replacement scheme work when there are more than 2 entries per bucket?

My current thought would be...

1. Depth-preferred, ignoring age
2. Replace if newDepth + N*oldAge >= oldDepth
3. Replace if newDepth + 2N*oldAge >= oldDepth
4. Always replace
5. Always replace

It seems like a poor idea to always replace entries that came from a different search, even though some of those positions can no longer occur, or are unlikely to appear in the tree.
Has anyone toyed with some variant of what I proposed, taking into account how old the entry is?
Maybe even the bound type?
Additionally, could we store the age of the current search in an entry whenever that entry is accessed? That way we could throw out any entries that were not used in the previous search.

Last question first: the answer is "yes". I suspect everyone does this. If you probe and get a signature match, update the entry's age, since it was useful in this search.

For a bucket, I would propose something different.

Take the bucket as a set of N entries. For each of the following, scan the entire set:

(1) First pass: replace the entry whose signature matches, if there is one.
(2) Second pass: replace any entry from an older search.
(3) Final pass: replace the entry with the shallowest draft; you ALWAYS replace something if you get to this point.
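In code, that three-pass scan would look roughly like this (bucket size, entry layout, and names are just placeholders, not from any particular engine):

```c
#include <stdint.h>

#define BUCKET_SIZE 4

typedef struct {
    uint64_t sig;    /* zobrist signature */
    uint8_t  depth;  /* draft */
    uint8_t  age;    /* search counter when stored */
    /* score, move, bound omitted for brevity */
} Entry;

/* Pick the slot to overwrite, per the three passes above. */
Entry *select_victim(Entry *bucket, uint64_t sig, uint8_t current_age)
{
    int i, shallowest = 0;

    /* Pass 1: same position already stored -> overwrite it. */
    for (i = 0; i < BUCKET_SIZE; i++)
        if (bucket[i].sig == sig)
            return &bucket[i];

    /* Pass 2: any entry left over from an older search. */
    for (i = 0; i < BUCKET_SIZE; i++)
        if (bucket[i].age != current_age)
            return &bucket[i];

    /* Pass 3: evict the shallowest draft -- something is
       always replaced. */
    for (i = 1; i < BUCKET_SIZE; i++)
        if (bucket[i].depth < bucket[shallowest].depth)
            shallowest = i;
    return &bucket[shallowest];
}
```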

AndrewGrant
Posts: 485
Joined: Tue Apr 19, 2016 4:08 am
Location: U.S.A
Full name: Andrew Grant

Re: Transposition Table replacement schemes

Post by AndrewGrant » Thu May 05, 2016 8:00 am

I like the scheme you have proposed. I'll work on implementing it and testing the results. Not throwing out the whole table will most certainly be an improvement, no matter how poor a replacement scheme I implement.

Thanks for the help

hgm
Posts: 23714
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Transposition Table replacement schemes

Post by hgm » Thu May 05, 2016 8:35 am

The hash table would be used optimally if it kept equal numbers of positions for each depth. This is because the efficiency at each level decreases only logarithmically with the number of positions stored for it. Levels closer to the root are traversed much more slowly than levels near the tips, however, so always-replace would lead to a very large over-representation of low-depth nodes. This is why you try to favor the higher depths through some depth-preferred scheme.

Most people have buckets of 4, and then replace (in the absence of aged entries) the one with the lowest depth of those 4, irrespective of whether the stored depth is lower or higher than the new one. What worked well for me (with 5 entries per bucket) is to have two depth-preferred pairs and a single always-replace entry per bucket. The newly arriving position then replaces the less deep entry of the pair it hashes to, if it is deeper; if not, it goes to the always-replace entry. There is no need to keep an age counter for the always-replace entry, so with 64 bytes you can have four 1-byte age counters and 60/5 = 12 bytes per entry. With 10-byte entries you could fit in two always-replace entries (i.e. give each depth-preferred pair its own, rather than a shared one). You would then only need to test 3 entries on probing.
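In rough code, the store side of that 5-entry layout would be something like this (entry layout and names are illustrative; the pair is chosen by a hash bit not used for the bucket index):

```c
#include <stdint.h>

typedef struct {
    uint64_t key;
    uint8_t  depth;
    /* score, move, age etc. omitted */
} Entry;

/* b points at a bucket of 5 entries: pairs b[0..1] and b[2..3] are
   depth-preferred, b[4] is the shared always-replace slot. */
void bucket_store(Entry *b, uint64_t key, Entry fresh)
{
    Entry *pair = &b[(key & 1) ? 2 : 0];   /* pick pair by a spare key bit */
    Entry *victim = (pair[0].depth <= pair[1].depth) ? &pair[0] : &pair[1];

    if (fresh.depth > victim->depth)
        *victim = fresh;     /* depth-preferred: beat the less deep of the pair */
    else
        b[4] = fresh;        /* otherwise dump into the always-replace slot */
}
```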

With rather minimal 9-byte entries (4 signature, 2 score, 2 move/bound type, 1 depth) you could cram 7 entries into a bucket, but you would only have 1 byte left for age counting.

Note that age-based replacement is totally crippling for analysis: you will lose all results deeper along your main line every time you visit a side variation. This is an argument for using quantity-based replacement: selectively replace those depths that occupy too large a fraction of the table. With depth-preferred replacement, high depths tend to conquer the table in the long run; they are produced only slowly, but because of their depth they are replaced even more slowly. Once their number gets unbearably high (say, higher than the number of d=1 entries), you just start overwriting them to put a stop to it. You can do that by keeping a histogram of the frequency of all depths in the depth-preferred part of the table, which, unlike age counters, does not require any space in the hash entry itself.
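A rough sketch of that bookkeeping (names and the cap policy are just placeholders; the cap used here, the number of d=1 entries, is only the example from the text):

```c
#include <stdint.h>

#define MAX_DEPTH 64

uint32_t depth_count[MAX_DEPTH];   /* entries currently stored per draft */

/* Called whenever a depth-preferred entry is (over)written;
   oldDepth < 0 means the slot was empty. */
void on_store(int newDepth, int oldDepth)
{
    depth_count[newDepth]++;
    if (oldDepth >= 0)
        depth_count[oldDepth]--;
}

/* Has this draft conquered too much of the table? If so, its entries
   become fair game for overwriting despite their depth. */
int overrepresented(int storedDepth)
{
    return depth_count[storedDepth] > depth_count[1];
}
```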

A poor man's approximation to this is under-cut replacement: in addition to the normal depth-preferred rules, you replace the entry in the slot you initially hash to when it has a draft exactly one higher than the entry you want to store, with a certain probability (say, on one out of every four probes). This way no depth can ever become immune to replacement.
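In code, the under-cut test would be roughly the following (the 1-in-4 counter is just one way to get the probability):

```c
#include <stdint.h>

uint32_t probe_count;   /* global counter driving the 1-in-4 chance */

/* Depth-preferred test extended with the under-cut rule: a draft
   exactly one below the stored one may still replace it about once
   per four attempts, so no depth is ever immune. */
int undercut_replace(int newDepth, int storedDepth)
{
    if (newDepth >= storedDepth)
        return 1;                          /* normal depth-preferred rule */
    if (newDepth == storedDepth - 1)       /* under-cut candidate */
        return (++probe_count & 3) == 0;   /* roughly 1 in 4 */
    return 0;
}
```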

AndrewGrant
Posts: 485
Joined: Tue Apr 19, 2016 4:08 am
Location: U.S.A
Full name: Andrew Grant

Re: Transposition Table replacement schemes

Post by AndrewGrant » Thu May 05, 2016 9:45 pm

Can you clarify something for me?

When you say "two pairs of depth-preferred" and "the newly arriving position then replaces the least deep of the pair it hashes to", are you saying to use one bit from the hash that is not part of the index to determine the appropriate pair of the two?

So Bits 0-24 = key
Bit 25 determines pair 1 or pair 2?

Thank you for the response, and thank you for your programs. MicroMax has been, at least since I began really focusing on my engine, one of my go-to test opponents.
