
Transposition Table Oddity

Posted: Sun Sep 28, 2014 7:09 pm
by cetormenter
I have been in the process of rewriting Nirvana, and during this I have been able to look at everything a little more closely. So I decided to retest something that has always concerned me.

I have looked at the source code of multiple engines and have noticed that they save information from every single node (All, Cut, and PV nodes). However, through testing, saving all nodes to the TT has been a very clear loss for Nirvana. This leads me to believe that either there is a bug in my transposition table implementation or perhaps I am not using the information saved in the table to its fullest. My replacement scheme favors high-depth PV nodes from the current search.
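
As a rough illustration of the kind of scheme I mean, a replacement test might look like the sketch below. This is only illustrative (the entry layout and names are made up for the example, not Nirvana's actual code).

Code:

#include <cstdint>

enum Bound : uint8_t { BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

struct TTEntry {
    uint64_t key;    // Zobrist signature of the position
    int16_t  score;
    uint8_t  depth;  // draft: the remaining depth the entry was searched to
    uint8_t  age;    // value of the search counter when the entry was stored
    Bound    bound;  // EXACT = PV node, LOWER = Cut node, UPPER = All node
    uint16_t move;   // best move, 0 if none (typical for an All node)
};

// Replacement test favoring entries from the current search, PV (EXACT)
// bounds, and deeper drafts, in that order of priority.
bool should_replace(const TTEntry& old_e, const TTEntry& new_e, uint8_t current_age) {
    if (old_e.age != current_age)
        return true;                               // entry is from an old search
    if (new_e.bound == BOUND_EXACT && old_e.bound != BOUND_EXACT)
        return true;                               // prefer PV entries
    return new_e.depth >= old_e.depth;             // otherwise prefer deeper drafts
}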

A few stats I obtained during this testing: a TT cutoff occurs approximately 5-6% of the time, and when a cutoff does not occur, a TT move is found 12-15% of the time. This means that TT entries from PV and Cut nodes are used in far more cases than All-node entries, because, as we all know, there is no best move in an All node: every move fails low, so no move stands out to store.

Here is a test I just ran between two versions of Nirvana: one that saves entries from every node (NirvanachessALL), and another that only saves an entry if there is a best move. The testing conditions were 6s + 0.05s with a 2 MB hash table in order to create some hash pressure.

Code:

Rank Name               Elo    +    - games score oppo. draws 
   1 Nirvanachess      18    7    7  5500   56%   -18   43% 
   2 NirvanachessALL  -18    7    7  5500   44%    18   43% 

Has anybody else noticed something similar? If not, and saving All nodes does help for you, is there something you do specifically with All-node entries to squeeze a little more mileage out of them?

Re: Transposition Table Oddity

Posted: Sun Sep 28, 2014 7:13 pm
by bob
cetormenter wrote:[…] through testing, saving all nodes to the TT has been a very clear loss for Nirvana. This leads me to believe that either there is a bug in my transposition table implementation or perhaps I am not using the information saved in the table to its fullest. […] Has anybody else noticed something similar?
Something has to be broken. What you are basically saying here is that when you reach a node, if you could somehow look into the future and discover that you will either fail high (return beta), fail low (return alpha), or return a true score, and you then use that information to avoid the search, it somehow hurts your performance. When you think about it, that is impossible, because the hash table info IS a "look into the future" if it is done correctly. If you screw up the draft, or fail to verify that a bound can be used before using it, then yes, it will fail. But done correctly it has to help, since ALL you are doing is eliminating duplicate work; there is no speculation involved.

Re: Transposition Table Oddity

Posted: Sun Sep 28, 2014 8:15 pm
by cetormenter
A loss of Elo does not necessarily mean that something is broken. It could be, as I stated, that TT entries from All nodes are simply not as helpful, due to their inability to produce cutoffs at nodes where draft > tt_draft. From PV or Cut nodes you can still glean some information.

It is not too far-fetched to see cases where a beta cutoff is produced by a quiet move with SEE < 0. Most likely this move will have a terrible history score, since in the majority of positions it would simply lose; however, due to a tactic, in this particular position it wins. Without that entry saved in the table you would have to search through many moves before finally reaching this one, and then perform one re-search (maybe two, depending on whether it is a PV node), when you could simply have searched this move first with the help of the TT entry.

Re: Transposition Table Oddity

Posted: Sun Sep 28, 2014 9:44 pm
by bob
cetormenter wrote:[…] It could be, as I stated, that TT entries from All nodes are simply not as helpful, due to their inability to produce cutoffs at nodes where draft > tt_draft. […] It is not too far-fetched to see cases where a beta cutoff is produced by a quiet move with SEE < 0. […]
All TT hits have to be verified before they are used. If you have a LOWER entry, you have to make certain that LOWER >= current beta before you can take the fail-high. If you have an UPPER entry, you have to make certain that UPPER <= current alpha. If you have an EXACT entry, you have to make certain that the EXACT value is between current alpha and beta. And for all of them you have to make sure tt_draft >= current depth.
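
In code form, those checks amount to something like this sketch (TTEntry and Bound are the illustrative types from the sketch earlier in the thread, not any particular engine's code):

Code:

// Bound-verified TT cutoff test implementing exactly the rules above.
bool tt_cutoff(const TTEntry& e, int depth, int alpha, int beta, int& score) {
    if (e.depth < depth)                            // tt_draft must cover current depth
        return false;
    if (e.bound == BOUND_LOWER && e.score >= beta)  { score = e.score; return true; }
    if (e.bound == BOUND_UPPER && e.score <= alpha) { score = e.score; return true; }
    if (e.bound == BOUND_EXACT && e.score > alpha && e.score < beta) {
        score = e.score;
        return true;
    }
    return false;                                   // hit exists, but the bound is unusable
}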

Your example doesn't work. You assume that the ttable says "fail high" but that a search would not. How could that happen, since you did the SAME search before storing that ttable entry in the first place? And you should ALWAYS search the ttable move first anyway, regardless of the depth or bound values.
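
In sketch form, "hash move first" is just a staged picker like the one below (MovePicker and its fields are illustrative stand-ins, not a real engine's API):

Code:

#include <cstdint>
#include <vector>

// Two-stage picker: the hash move is tried before anything else is generated.
struct MovePicker {
    uint16_t tt_move = 0;              // from the TT probe, 0 if none
    std::vector<uint16_t> rest;        // remaining moves, generated/scored later
    std::size_t idx = 0;
    bool tt_done = false;

    uint16_t next() {
        if (!tt_done) {                // stage 1: hash move first, always
            tt_done = true;
            if (tt_move) return tt_move;
        }
        while (idx < rest.size()) {    // stage 2: everything else
            uint16_t m = rest[idx++];
            if (m != tt_move)          // don't search the hash move twice
                return m;
        }
        return 0;                      // no moves left
    }
};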

I don't see how this could hurt. Yes, on occasion it might produce a bigger search, but overall it should help if nothing is broken. I could certainly run a cluster test with ALL (UPPER) TT entries not stored, or stored but not used. But if you do things right, even an ALL entry can have a best move, since you might be overwriting an older entry with the same signature that did have a best move; you should keep that move and store it with the new entry.
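
That last point can be a one-line check at store time; a sketch, again using the illustrative TTEntry layout from earlier in the thread:

Code:

// Keep the old best move when the new result has none (an All node
// overwriting an entry for the same position).
void tt_store(TTEntry& slot, uint64_t key, int16_t score, uint8_t depth,
              Bound bound, uint16_t move, uint8_t current_age) {
    if (move == 0 && slot.key == key)
        move = slot.move;              // preserve the move already stored here
    slot = TTEntry{key, score, depth, current_age, bound, move};
}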

Re: Transposition Table Oddity

Posted: Sun Sep 28, 2014 9:44 pm
by Evert
cetormenter wrote:My replacement scheme favors high-depth PV nodes from the current search.
So how does this work, exactly? Do you overwrite high-depth or PV entries when a new position hashes to the same slot, or do you keep the old entries in the table at the expense of the new entry?

What exactly is your test for deciding whether an entry is usable?

Re: Transposition Table Oddity

Posted: Sun Sep 28, 2014 10:47 pm
by cetormenter
bob wrote:[…] All TT hits have to be verified before they are used. […] But if you do things right, even an ALL entry can have a best move, since you might be overwriting an older entry with the same signature that did have a best move; you should keep that move and store it with the new entry.
Of course I am making sure that the upper and lower bounds are, as you call it, "TT hit verified".

Perhaps I did not explain myself correctly. I was not talking about any kind of search instability where the TT would fail high but a search from that position would fail low. I was simply trying to think of an example where a position previously failed high, but you no longer have access to the previous TT move because the entry was overwritten in favor of a fail-low.

I think my issue lies in that last point: I am not preserving the TT move during overwrites. I am not quite sure why I did not think of doing this before, because it would let you get as much benefit as possible out of every entry.
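
So the fix amounts to something like the following at the point where search() stores its result (just a sketch built on the illustrative helpers above; store_result and its parameters are stand-ins, not Nirvana's actual code):

Code:

// Tail of search(): best_move stays 0 when no move improved alpha (an All
// node), and tt_store() above then keeps whatever move the slot already had.
int16_t store_result(TTEntry& slot, uint64_t key, int16_t best_score,
                     int16_t old_alpha, int16_t beta, uint8_t depth,
                     uint16_t best_move, uint8_t current_age) {
    Bound bound = best_score >= beta     ? BOUND_LOWER
                : best_score > old_alpha ? BOUND_EXACT
                                         : BOUND_UPPER;
    tt_store(slot, key, best_score, depth, bound, best_move, current_age);
    return best_score;
}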