Transposition Table - Replacement and PV

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Transposition Table - Replacement and PV

Post by Don »

diep wrote:
hgm wrote:So what is your point? That if the hash table is big enough, replacement is no issue? That is pretty obvious, not?

As to your arithmetic:

131 sec ~ 2 min.
200 min ~ 3hours. Not 2 weeks.

It is not at all uncommon that people analyze positions for hours, for correspondence Chess...
Yes, and other than Diep, all engines are really optimized for correspondence chess, as they incorporate big knowledge, do many probes in the hash table, and use more than 64 bits per hash entry to avoid collisions.

Also, in all their testing at home they test at really slow time controls and do not mess up in search when reaching big depths.

Really correspondence optimized you know!
thumbs up!

In the meantime why don't you continue your work at winboard to allow faster time controls than 1 second entire game?
Is that sarcasm on your part?
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Transposition Table - Replacement and PV

Post by bob »

Joost Buijs wrote:
bob wrote:
Joost Buijs wrote:What works the best for me is:

Always overwrite entries of an older table generation.

And if the table generation is the same, then:

Lower bound and exact entries will always overwrite an existing entry if the depth >= the existing depth or the existing entry is an upper bound.

Upper bounds will only overwrite upper bounds with lesser or equal depth.

So upper bounds will never overwrite lower bound or exact entries from the same table generation.
I don't think this works very well. One needs to ALWAYS store an entry, period. The only question is, what gets overwritten, not IF something gets overwritten. Otherwise you can "outsearch" the TT and reach a point where it doesn't help at all because you refuse to overwrite, and the stuff you are trying to store is important for future efficiency.

I make multiple tests but in every case that HashStore() is called, I store that entry, period...
Well, I tried many combinations, and this works the best for my engine.
It's possible that this will be different for Crafty.
I don't have a separate hashtable for the PV, this scheme keeps the PV alive. If you have a big enough hashtable it's not that critical anyway.
My PV hash table only stores paths, so that an EXACT hit on the PV doesn't truncate the PV; I know the path following that hit (which is the thing stored in the PV hash). The normal hash holds all the scores/bounds/flags/draft/etc. info, just like everyone else's. I use 4-entry buckets, which gives me 4 places to store the current position when a store is needed. I am going to store over one of those 4 entries, based on a simple priority:

(1) if signatures match, I overwrite the matched entry immediately.

(2) if not, then if this position was stored in a previous search, I overwrite the entry with the smallest draft from that previous (not current) search.

(3) else I overwrite the entry with the shallowest draft, period.

I NEVER fail to store when a search ends at any ply.
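bob's three replacement rules for a 4-entry bucket can be sketched in C. This is a minimal illustration, not Crafty's actual code; the struct layout, the field names, and `pick_victim` are all invented for the example:

```c
#include <stdint.h>

/* Hypothetical entry layout -- field names are illustrative, not Crafty's. */
typedef struct {
    uint64_t signature; /* Zobrist key (or the upper bits of it)        */
    int draft;          /* remaining depth of the search that stored it */
    int age;            /* search counter at store time                 */
} TTEntry;

/* Pick which of the 4 bucket slots to overwrite, following the priority
 * above: (1) matching signature, (2) shallowest entry from a previous
 * search, (3) shallowest entry overall. A slot is always returned, so
 * the store never fails. */
int pick_victim(const TTEntry bucket[4], uint64_t sig, int current_age)
{
    int victim = 0, best_draft = bucket[0].draft;
    int old_victim = -1, old_draft = 0;

    for (int i = 0; i < 4; i++) {
        if (bucket[i].signature == sig)
            return i;                      /* rule 1: same position   */
        if (bucket[i].age != current_age &&
            (old_victim < 0 || bucket[i].draft < old_draft)) {
            old_victim = i;                /* rule 2: old and shallow */
            old_draft = bucket[i].draft;
        }
        if (bucket[i].draft < best_draft) {
            victim = i;                    /* rule 3: shallowest      */
            best_draft = bucket[i].draft;
        }
    }
    return old_victim >= 0 ? old_victim : victim;
}
```

Note that, as bob stresses, the function always returns a slot: the only question is which one.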
Joost Buijs
Posts: 1563
Joined: Thu Jul 16, 2009 10:47 am
Location: Almere, The Netherlands

Re: Transposition Table - Replacement and PV

Post by Joost Buijs »

I do almost exactly the same; the only difference is that I don't overwrite lower bounds or exact entries with upper bounds, and this holds only for the current search. This gives me the best time to depth.

It is just like Vincent says: I only play blitz and rapid games and my TT is rather big. For games with longer time controls it can be different.
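For reference, Joost's overwrite rules can be collapsed into a single predicate. This is a hedged sketch with invented names (`may_overwrite`, the 0/1/2 bound encoding); it only mirrors the rules as he states them, not his actual engine code:

```c
#include <stdbool.h>

/* Sketch of Joost's rule set, with invented names.
 * bound encoding: 0 = upper bound, 1 = lower bound, 2 = exact.
 * Returns true if the new entry may overwrite the old one. */
bool may_overwrite(int old_gen, int new_gen,
                   int old_bound, int new_bound,
                   int old_depth, int new_depth)
{
    if (old_gen != new_gen)
        return true;                 /* always overwrite older generations */
    if (new_bound != 0)              /* lower bound or exact */
        return new_depth >= old_depth || old_bound == 0;
    /* upper bounds only replace upper bounds of lesser or equal depth */
    return old_bound == 0 && new_depth >= old_depth;
}
```

The last line is what makes the scheme non-total: within one generation, an upper bound can never displace a lower bound or an exact entry, which is exactly the point bob objects to.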
Last edited by Joost Buijs on Sat Dec 08, 2012 8:16 am, edited 1 time in total.
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: Transposition Table - Replacement and PV

Post by lucasart »

diep wrote:
hgm wrote:So what is your point? That if the hash table is big enough, replacement is no issue? That is pretty obvious, not?

As to your arithmetic:

131 sec ~ 2 min.
200 min ~ 3hours. Not 2 weeks.

It is not at all uncommon that people analyze positions for hours, for correspondence Chess...
Yes, and other than Diep, all engines are really optimized for correspondence chess, as they incorporate big knowledge, do many probes in the hash table, and use more than 64 bits per hash entry to avoid collisions.

Also, in all their testing at home they test at really slow time controls and do not mess up in search when reaching big depths.

Really correspondence optimized you know!
thumbs up!

In the meantime why don't you continue your work at winboard to allow faster time controls than 1 second entire game?
Well, of course, people don't test changes of 5 Elo at time controls of 1h/40 moves. Unless you have an eternity... But that doesn't mean that testing at ultra blitz is stupid. It's what all developers of top engines do, and even at long time controls, the improvements measured at ultra blitz hold up. Look at the Stockfish GitHub page, for example, and you'll realize that the whole SF project is based on ultra-blitz testing. And as a result it is now 500 Elo stronger than the Glaurung 2.1 code base, at *long* time controls.
The point is that to be able to measure something you have to increase the sensitivity of the measurement. And that is exactly what HGM was explaining when he pointed out that I didn't have an overload factor large enough (65k entries for about 210k nodes searched per second) to get any measurable improvement.
Also, you mentioned that most people don't use the transposition table in the qsearch. Well, I do, and I can assure you that it gives a very measurable Elo increase (assuming you do it correctly).
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Transposition Table - Replacement and PV

Post by Don »

bob wrote:
Joost Buijs wrote:What works the best for me is:

Always overwrite entries of an older table generation.

And if the table generation is the same, then:

Lower bound and exact entries will always overwrite an existing entry if the depth >= the existing depth or the existing entry is an upper bound.

Upper bounds will only overwrite upper bounds with lesser or equal depth.

So upper bounds will never overwrite lower bound or exact entries from the same table generation.
I don't think this works very well. One needs to ALWAYS store an entry, period. The only question is, what gets overwritten, not IF something gets overwritten. Otherwise you can "outsearch" the TT and reach a point where it doesn't help at all because you refuse to overwrite, and the stuff you are trying to store is important for future efficiency.

I make multiple tests but in every case that HashStore() is called, I store that entry, period...
I am using a scheme that is virtually identical to his, except that the decision is what to overwrite, not whether I'm going to overwrite. So I basically score the 4 entries in the "set" and pick the lowest-scoring one to overwrite. So in priority mine works like this:

1. If an entry matches, it's the only slot that gets used.
2. If an entry is empty, of course it is always used.
3. Older generation is the primary key (always prefer to overwrite older entries).
4. The next key matches Buijs's logic: prefer to overwrite an upper bound, or replace with a lower bound or exact score.
5. The last key is the depth: within those rules, prefer to overwrite the lowest-depth entries.

Note that even if all entries are from a different generation we still apply the other rules. I used to automatically just grab the first entry from a different generation but there is no reason not to distinguish between them too.
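Don's "score the 4 entries and evict the lowest" idea could look like the following sketch. The weights, the struct, and the field names are invented; the only claim taken from the post is the key ordering (generation first, then bound type, then depth):

```c
#include <stdint.h>

/* Illustrative entry: bound is 0 = upper, 1 = lower, 2 = exact.
 * Names are made up for the sketch, not Komodo's. key == 0 can be
 * taken to mean an empty slot, which then scores lowest of all. */
typedef struct {
    uint64_t key;
    int depth;
    int bound;
    int generation;
} Entry;

/* Score a slot for replacement: lower score = better eviction victim.
 * Generation dominates, then bound type, then depth, mirroring the
 * key ordering above. The bit shifts just keep the keys from mixing. */
int slot_score(const Entry *e, int current_gen)
{
    int s = 0;
    if (e->generation == current_gen) s += 1 << 16; /* keep current gen   */
    if (e->bound != 0)                s += 1 << 8;  /* keep lower/exact   */
    s += e->depth;                                  /* keep deeper drafts */
    return s;
}

int pick_slot(const Entry set[4], uint64_t key, int current_gen)
{
    int best = 0;
    for (int i = 0; i < 4; i++) {
        if (set[i].key == key)
            return i;                   /* a matching entry always wins */
        if (slot_score(&set[i], current_gen) < slot_score(&set[best], current_gen))
            best = i;
    }
    return best;
}
```

Because every slot gets a finite score, some slot is always chosen; the "always replace" policy is built in, exactly as in the bucket scheme above.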

It could be that Buijs has a point: imagine that all entries in the set are of very high quality (current generation, exact score, high depth, etc.) and the entry you want to insert is a crappy, low-depth upper-bound entry. Should we forgo the replacement? I don't have an answer because I have never tried this, but one replacement rule could be to score the new entry too, and add it only if it beats the one we want to replace.

I tend to think you are probably right, however, because the most recent entries in the hash table tend to have much higher relevance than some random entry.

Something else I wanted to experiment with someday (which is highly speculative and probably won't work) is to actually give some kind of preference to how recent the entry is. I don't mean the generation or age, but perhaps the node counter when the entry was stored. If it's relevant to store the "generation", it seems like this is just an abstraction of the same concept. I would of course still give generation and depth precedence.

Anyway, this is definitely not "low hanging fruit". Even though I allocate a small amount of time to trying highly speculative, low-payoff things, this probably will never make it onto my list ...

Don
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
Joost Buijs
Posts: 1563
Joined: Thu Jul 16, 2009 10:47 am
Location: Almere, The Netherlands

Re: Transposition Table - Replacement and PV

Post by Joost Buijs »

Don wrote: Something else I wanted to experiment with someday (which is highly speculative and probably won't work) is to actually give some kind of preference to how recent the entry is. I don't mean the generation or age, but perhaps the node counter when the entry was stored. If it's relevant to store the "generation", it seems like this is just an abstraction of the same concept. I would of course still give generation and depth precedence.
Don
It seems like a good idea to add the node count into the formula. I never thought about this. I am certainly going to try this some day.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Transposition Table - Replacement and PV

Post by diep »

Joost Buijs wrote:
Don wrote: Something else I wanted to experiment with someday (which is highly speculative and probably won't work) is to actually give some kind of preference to how recent the entry is. I don't mean the generation or age, but perhaps the node counter when the entry was stored. If it's relevant to store the "generation", it seems like this is just an abstraction of the same concept. I would of course still give generation and depth precedence.
Don
It seems like a good idea to add the node count into the formula. I never thought about this. I am certainly going to try this some day.
Losing 5 bytes?

If you try to lose less, say 6 bits, you can store the iteration depth.

What I use is 8 bits or so (I'd have to look it up) holding the sum of the depth left and the number of searches performed so far in the game (or analysis).
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Transposition Table - Replacement and PV

Post by Don »

Joost Buijs wrote:
Don wrote: Something else I wanted to experiment with someday (which is highly speculative and probably won't work) is to actually give some kind of preference to how recent the entry is. I don't mean the generation or age, but perhaps the node counter when the entry was stored. If it's relevant to store the "generation", it seems like this is just an abstraction of the same concept. I would of course still give generation and depth precedence.
Don
It seems like a good idea to add the node count into the formula. I never thought about this. I am certainly going to try this some day.
Of course the question is what formula to use. I think that more recently stored entries should be given some weight, but it's not clear how this is to be integrated with the other factors. We obviously give recency the ultimate preference when we decide that we will ALWAYS overwrite some entry regardless of quality, so the concept could be extended.
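One possible formula, purely as a speculative sketch of the idea in this post: fold the store-time node counter into the eviction score so that recency only breaks ties below generation and depth, which keep precedence as Don suggests. Every weight and name here is invented:

```c
#include <stdint.h>

/* Speculative sketch: an entry remembers the global node counter at
 * the moment it was written. All field names and weights are made up. */
typedef struct {
    int depth;
    int generation;
    uint64_t stored_at;  /* global node counter at store time */
} Slot;

/* Higher score = more worth keeping. Generation dominates, then depth;
 * recency only separates entries that tie on both. The shifts keep the
 * three terms in disjoint ranges (recency is capped below 2^28). */
uint64_t keep_score(const Slot *s, int current_gen, uint64_t nodes_now)
{
    uint64_t age_bonus  = (s->generation == current_gen) ? (1ull << 40) : 0;
    uint64_t depth_term = (uint64_t)s->depth << 28;
    uint64_t dist = nodes_now - s->stored_at;          /* nodes since store */
    uint64_t recency = dist < (1ull << 28) ? (1ull << 28) - dist : 0;
    return age_bonus + depth_term + recency;
}
```

At store time one would evict the slot with the smallest `keep_score`. Whether node-counter recency buys anything over plain generation counting is exactly the open question of the thread.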
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
Rebel
Posts: 6993
Joined: Thu Aug 18, 2011 12:04 pm

Re: Transposition Table - Replacement and PV

Post by Rebel »

diep wrote:
Joost Buijs wrote:
Don wrote: Something else I wanted to experiment with someday (which is highly speculative and probably won't work) is to actually give some kind of preference to how recent the entry is. I don't mean the generation or age, but perhaps the node counter when the entry was stored. If it's relevant to store the "generation", it seems like this is just an abstraction of the same concept. I would of course still give generation and depth precedence.
Don
It seems like a good idea to add the node count into the formula. I never thought about this. I am certainly going to try this some day.
Losing 5 bytes?

If you try to lose less, say 6 bits, you can store the iteration depth.
Funny that you mention this. I store the iteration depth as well. If the iteration depths do not match, then the depths in the tree must match, or else I don't reward the transposition. These 2 extra checks and 2 extra bytes in the TT (compared to the standard approach) slow down my search by 10%, but the gains in matches, although not big, are consistent.

I am not saying I understand the logic, I don't.
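Rebel's extra condition, as described (and taking "depths in the tree" to mean distance from the root, which is my interpretation), might look like the following; all names and the field layout are hypothetical:

```c
#include <stdbool.h>

/* Sketch of Rebel's stricter probe as described in this post.
 * Field and parameter names are mine, not Rebel's. */
typedef struct {
    int draft;      /* remaining depth stored with the entry */
    int iteration;  /* root iteration depth at store time    */
    int ply;        /* distance from the root at store time  */
} RebelEntry;

bool accept_transposition(const RebelEntry *e, int draft_needed,
                          int current_iteration, int current_ply)
{
    if (e->draft < draft_needed)
        return false;                     /* the ordinary draft check      */
    if (e->iteration == current_iteration)
        return true;                      /* same iteration: trust the hit */
    return e->ply == current_ply;         /* else require same tree depth  */
}
```

The two extra comparisons and the two extra stored fields are the 10% cost Rebel mentions; diep's reply below explains why this restriction can pay off for an iteration-dependent selective search.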
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Transposition Table - Replacement and PV

Post by diep »

Rebel wrote:
diep wrote:
Joost Buijs wrote:
Don wrote: Something else I wanted to experiment with someday (which is highly speculative and probably won't work) is to actually give some kind of preference to how recent the entry is. I don't mean the generation or age, but perhaps the node counter when the entry was stored. If it's relevant to store the "generation", it seems like this is just an abstraction of the same concept. I would of course still give generation and depth precedence.
Don
It seems like a good idea to add the node count into the formula. I never thought about this. I am certainly going to try this some day.
Losing 5 bytes?

If you try to lose less, say 6 bits, you can store the iteration depth.
Funny that you mention this. I store the iteration depth as well. If the iteration depths do not match, then the depths in the tree must match, or else I don't reward the transposition. These 2 extra checks and 2 extra bytes in the TT (compared to the standard approach) slow down my search by 10%, but the gains in matches, although not big, are consistent.

I am not saying I understand the logic, I don't.
It makes sense for Rebel, and not for other programs, to use strict transposition rules like this.

Your selective search, after the brute-force depth, is dependent upon the iteration depth; that's why.

Now if we clear the hash table and just search one position, you still might not notice it. Yet if we start overwriting previous searches, then at iteration I with depth left D we can suddenly get a cutoff from a previous search of the same position with depth left D but a much larger iteration depth I', where the search was cut off right away because it was already inside the selective-search depth there.

That would be a dubious cutoff. In short, you would need complicated logic involving both depths stored in the transposition table.

Probably you found a clever, simple shortcut there which costs you less (despite the 10% time reduction) yet works well enough.