Transposition Table - Replacement and PV
Moderators: hgm, Rebel, chrisw
-
- Posts: 27790
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Transposition Table - Replacement and PV
Always-replace is better than protecting deep entries with worthless bounds, poisoning your hash table. But the standard solution is to use the entries in pairs, and overwrite the least deep entry of the pair when you want to store a key that matches neither. Then you always keep the most recent entry (which was the most important thing), yet you protect deep entries from being overwritten, reserving half the table for them.
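The pairing scheme described above could be sketched like this (a minimal illustration; the struct layout and field names are invented, not from any engine in this thread):

```c
#include <stdint.h>

/* Hypothetical two-entry bucket; field layout is illustrative only. */
typedef struct { uint64_t key; int depth; int score; } Entry;
typedef struct { Entry e[2]; } Bucket;

/* Store into the pair: reuse a matching slot, otherwise evict the
   shallower of the two entries. The net effect is that one slot of
   each pair behaves depth-preferred while the other is effectively
   always-replace, so the newest entry is never lost. */
void store_pair(Bucket *b, uint64_t key, int depth, int score) {
    int slot;
    if (b->e[0].key == key)      slot = 0;
    else if (b->e[1].key == key) slot = 1;
    else slot = (b->e[0].depth <= b->e[1].depth) ? 0 : 1;
    b->e[slot].key = key;
    b->e[slot].depth = depth;
    b->e[slot].score = score;
}
```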
-
- Posts: 104
- Joined: Thu Sep 27, 2012 2:24 am
Re: Transposition Table - Replacement and PV
You are probably right. In my real code, however, I use alphabound and betabound. I went that way because after reading various documents and reviewing various samples, I had the same question, because I saw them go both ways.
I think you have upper and lower bounds mixed up. If you have a score that is 'at least' beta, wouldn't that be a lower bound?
Sorry if my on-the-fly code sample is misleading on that point; it really was meant to show the proper placement of where the TT is actually indexed.
BTW, I did test out the proper placement of the TT entry, and:
1. The search was faster
2. I can extract the PV from the TT
Thanks for the help
-
- Posts: 1822
- Joined: Thu Mar 09, 2006 11:54 pm
- Location: The Netherlands
Re: Transposition Table - Replacement and PV
That translates to:
Cheney wrote:You are probably right. In my real code, however, I use alphabound and betabound. I went that way because after reading various documents and reviewing various samples, I had the same question, because I saw them go both ways.
I think you have upper and lower bounds mixed up. If you have a score that is 'at least' beta, wouldn't that be a lower bound?
Sorry if my on-the-fly code sample is misleading on that point; it really was meant to show the proper placement of where the TT is actually indexed.
BTW, I did test out the proper placement of the TT entry, and:
1. The search was faster
2. I can extract the PV from the TT
Thanks for the help
alphabound = upperbound
betabound = lowerbound
But feel free to use alphabound and betabound, as in general that's easier for most people. That's why I often post about betabounds and alphabounds as well, as I see even experienced guys mess up the terminology.
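The mapping between the two naming schemes comes straight from the alpha-beta window (a minimal sketch; the enum names are illustrative):

```c
enum { TT_UPPER, TT_LOWER, TT_EXACT };

/* A fail-high (score >= beta) means the true score is AT LEAST this
   value: a LOWER bound, i.e. what the posters call a "betabound".
   A fail-low (score <= alpha) means the true score is AT MOST this
   value: an UPPER bound, i.e. an "alphabound". */
int tt_bound_type(int score, int alpha, int beta) {
    if (score >= beta)  return TT_LOWER;
    if (score <= alpha) return TT_UPPER;
    return TT_EXACT;
}
```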
-
- Posts: 27
- Joined: Wed May 28, 2008 1:07 pm
- Location: Porsgrunn, Norway
Re: Transposition Table - Replacement and PV
I use an always-replace strategy, but split the hash table in 8 parts, as in
HashTable[wtm][depth]. This way deep entries are kept much longer than when
using a flat table. A drawback is that I have to probe up to 4 times for an
entry. A plus is that the hash table doesn't need to be a power of 2, only each
part does.
Fine 70 is solved in 0.01s (Kb1), and score +10.99 after 6.3s d41.
Cheers, Lasse
Code: Select all
#define TTE_Size ((HTSize<<17)/sizeof(THTEntry))

void TTStore(TTree *tr, U32 Move, int Value, int Type, U32 ply, U32 maxply)
// Transposition table split in [wtm][depth] = [2][4]
{
    THTEntry *HTAddress;
    U64 Offset;
    U32 Depth = maxply - ply;

    // Adjust mate scores to be relative to the current node
    if (Value > MATESCORE) Value += ply;
    if (Value < -MATESCORE) Value -= ply;

    // Pack move, depth, type and value into one 64-bit word
    U64 word1 = Move;
    word1 |= ((U64)Depth & 63) << 26;
    word1 |= ((U64)Type & 7) << 32;
    word1 |= ((U64)(Value + 32768)) << 35;

    // Select the table part for this depth band
    switch (Depth) {
        case 0:
        case 1:  Offset = 0 * TTE_Size; break;
        case 2:
        case 3:  Offset = 1 * TTE_Size; break;
        case 4:
        case 5:  Offset = 2 * TTE_Size; break;
        default: Offset = 3 * TTE_Size; break;
    }
    Offset += 4 * TTE_Size * tr->wtm;
    HTAddress = HT + Offset + (tr->HashKey & (TTE_Size - 1));

    // Lockless store: XOR the data into the stored key
    U64 word2 = tr->HashKey;
    word2 ^= word1;
    HTAddress->Data = word1;
    HTAddress->HKey = word2;
}
int TTProbe(TTree *tr, U32 *Move, int *Value, U32 ply, U32 maxply, int alpha, int beta)
{
    U64 Type, Depth;
    THTEntry *HTAddress;
    U64 Offset;
    int n;

    for (n = 3; n >= 0; n--) { // Look in all parts of the TT
        Offset = n * TTE_Size;
        Offset += 4 * TTE_Size * tr->wtm;
        HTAddress = HT + Offset + (tr->HashKey & (TTE_Size - 1));

        U64 word1 = HTAddress->Data;
        U64 word2 = HTAddress->HKey;
        word2 ^= word1;
        if (tr->HashKey != word2) continue;

        // Unpack the entry
        Type = (word1 >> 32) & 7;
        *Move = word1 & ((1 << 26) - 1);
        *Value = ((word1 >> 35) & 65535) - 32768;
        Depth = (word1 >> 26) & 63;

        if (Depth < maxply - ply) return TT_NOTHING;
        if ((Type == TT_EXACT) && ((*Value < alpha) || (*Value >= beta)))
            return TT_NOTHING;

        // Convert mate scores back from node-relative
        if (*Value > MATESCORE) *Value -= ply;
        if (*Value < -MATESCORE) *Value += ply;

        tr->HTHits++;
        tr->Nt++;

        switch (Type) {
            case TT_EXACT: return TT_EXACT;
            case TT_LOWER: if (*Value >= beta) return TT_LOWER; break;
            case TT_UPPER: if (*Value <= alpha) return TT_UPPER; break;
            default: Print(3, "Error:TTProbe:wrong type\n");
        }
    }
    return TT_NOTHING;
}
-
- Posts: 1822
- Joined: Thu Mar 09, 2006 11:54 pm
- Location: The Netherlands
Re: Transposition Table - Replacement and PV
Bob, the way I read Joost's posting, this isn't an HGM-type brainwave; it is something that actually works for him.
bob wrote:I don't think this works very well. One needs to ALWAYS store an entry, period. The only question is what gets overwritten, not IF something gets overwritten. Otherwise you can "outsearch" the TT and reach a point where it doesn't help at all, because you refuse to overwrite, and the stuff you are trying to store is important for future efficiency.
Joost Buijs wrote:What works the best for me is:
Always overwrite entries of an older table generation.
And if the table generation is the same then:
Lowerbound and exact will always overwrite an existing entry if the depth >= existing depth or the existing entry is an upperbound.
Upperbounds will only overwrite upperbounds with lesser or equal depth.
So upperbounds will never overwrite lowerbound or exact entries from the same table generation.
I make multiple tests, but in every case that HashStore() is called, I store that entry, period...
Also, it basically says that in some huge amount of cases, some percentage close to 99% I'd guess, he is overwriting.
So you're disagreeing here now about that 1%?
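One possible reading of the quoted replacement rules, as a predicate deciding whether a new entry may overwrite an existing one (struct and enum names are invented for illustration, not Joost's actual code):

```c
enum { B_UPPER, B_LOWER, B_EXACT };

/* Illustrative entry: search generation, draft, and bound type. */
typedef struct { int gen, depth, bound; } TTEntry;

int should_overwrite(const TTEntry *old, int gen, int depth, int bound) {
    if (old->gen != gen) return 1;  /* older generation: always replace */
    if (bound == B_LOWER || bound == B_EXACT)
        return depth >= old->depth || old->bound == B_UPPER;
    /* New entry is an upper bound: it only replaces an upper bound of
       lesser or equal depth, never a lower bound or exact entry from
       the same generation. */
    return old->bound == B_UPPER && old->depth <= depth;
}
```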
-
- Posts: 686
- Joined: Thu Mar 03, 2011 4:57 pm
- Location: Germany
Re: Transposition Table - Replacement and PV
Hi Lucas,
with 5 seconds you are lucky indeed with that strategy. With always replace you would of course be much faster.
My old laptop tells me:
info depth 26 seldepth 28 time 16 nodes 26094 pv a1b1 a7b7 b1c1 b7c8 c1d2 c8d7 d2c3 d7c7 nps 1630874 score cp 183 hashfull 0 tbhits 0
bestmove a1b1 ponder a7b7
info time 329 nodes 106666 nps 324212
I use a bucket system with 4 buckets. If I want to store a position and already find it in one of the buckets, it is replaced (always).
If not, I look for a bucket with an old entry (not from the current or previous search). If I find one, it goes in there.
Otherwise I store it in the bucket that seems to contain the least useful data (depending on age, depth and node type).
To keep an even distribution over the buckets I XOR 2 upper bits from the hash key when I loop over the buckets. This also ensures that a position has a good chance to be found in the first bucket I look into.
Seems to work OK for me.
Thomas...
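The XOR trick for the probe order could be sketched as follows (assuming 4 buckets and a 64-bit key; nothing here is from Thomas's actual code):

```c
#include <stdint.h>

#define NBUCKETS 4

/* Probe order over 4 buckets: XOR the loop index with the two top
   bits of the key, so different positions start at different buckets
   (even distribution) and each position's "home" bucket is always
   probed first (n = 0). */
uint32_t probe_order(uint64_t key, int n) {
    uint32_t top2 = (uint32_t)(key >> 62);  /* 2 upper bits of key */
    return ((uint32_t)n ^ top2) & (NBUCKETS - 1);
}
```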
-
- Posts: 5106
- Joined: Tue Apr 29, 2008 4:27 pm
Re: Transposition Table - Replacement and PV
I also use set-associative hashing. I have rules which determine which of the 4 entries to replace, but one way or the other one of them always gets replaced.
I use a scoring system because the rules are fairly complicated. If I find an empty or matching entry it is always used; otherwise I consider the "age" of the entry, the depth and the bounds.
I experiment with this periodically but I have yet to improve on the basic scheme.
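A scoring-based victim selection of this kind might be sketched as follows (the weights and fields are invented for illustration, not the poster's actual rules):

```c
#include <stdint.h>

/* Illustrative slot: lower keep_score = more replaceable. */
typedef struct { uint64_t key; int age, depth, exact; } Slot;

int keep_score(const Slot *s, int cur_age) {
    if (s->key == 0) return -1000;        /* empty: take it immediately */
    int score = s->depth;                 /* deeper entries are kept */
    score -= 8 * (cur_age - s->age);      /* older entries are replaced */
    if (s->exact) score += 4;             /* exact bounds are kept */
    return score;
}

/* Always returns a slot: something ALWAYS gets replaced. */
int pick_victim(const Slot set[4], uint64_t key, int cur_age) {
    int best, best_score, i, sc;
    for (i = 0; i < 4; i++)
        if (set[i].key == key) return i;  /* matching entry: always reuse */
    best = 0;
    best_score = keep_score(&set[0], cur_age);
    for (i = 1; i < 4; i++) {
        sc = keep_score(&set[i], cur_age);
        if (sc < best_score) { best = i; best_score = sc; }
    }
    return best;
}
```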
tpetzke wrote:Hi Lucas,
with 5 seconds you are lucky indeed with that strategy. With always replace you would of course be much faster.
My old laptop tells me:
info depth 26 seldepth 28 time 16 nodes 26094 pv a1b1 a7b7 b1c1 b7c8 c1d2 c8d7 d2c3 d7c7 nps 1630874 score cp 183 hashfull 0 tbhits 0
bestmove a1b1 ponder a7b7
info time 329 nodes 106666 nps 324212
I use a bucket system with 4 buckets. If I want to store a position and already find it in one of the buckets, it is replaced (always).
If not, I look for a bucket with an old entry (not from the current or previous search). If I find one, it goes in there.
Otherwise I store it in the bucket that seems to contain the least useful data (depending on age, depth and node type).
To keep an even distribution over the buckets I XOR 2 upper bits from the hash key when I loop over the buckets. This also ensures that a position has a good chance to be found in the first bucket I look into.
Seems to work OK for me.
Thomas...
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
-
- Posts: 3232
- Joined: Mon May 31, 2010 1:29 pm
- Full name: lucasart
Re: Transposition Table - Replacement and PV
OK, so pretty much like Stockfish then. I'm going to experiment with that. But first of all, I need to implement an eval cache, so I can remove the eval from the TT entries and keep the TT entries compact and efficient.
Don wrote:I also use set-associative hashing. I have rules which determine which of the 4 entries to replace, but one way or the other one of them always gets replaced.
I use a scoring system because the rules are fairly complicated. If I find an empty or matching entry it is always used; otherwise I consider the "age" of the entry, the depth and the bounds.
I experiment with this periodically but I have yet to improve on the basic scheme.
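A separate eval cache can be as simple as a small direct-mapped, always-replace table (a sketch under assumed sizes; note that a Zobrist key of exactly 0 would false-hit against zero-initialized slots):

```c
#include <stdint.h>

/* Minimal direct-mapped eval cache, power-of-two size assumed.
   Keeping the static eval here lets the main TT entry stay smaller. */
#define EC_SIZE (1u << 16)

typedef struct { uint64_t key; int16_t eval; } ECEntry;
static ECEntry ec[EC_SIZE];

void ec_store(uint64_t key, int16_t eval) {
    ECEntry *e = &ec[key & (EC_SIZE - 1)];
    e->key = key;                /* always replace: newest eval wins */
    e->eval = eval;
}

int ec_probe(uint64_t key, int16_t *eval) {
    ECEntry *e = &ec[key & (EC_SIZE - 1)];
    if (e->key != key) return 0; /* miss */
    *eval = e->eval;
    return 1;
}
```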
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
-
- Posts: 5106
- Joined: Tue Apr 29, 2008 4:27 pm
Re: Transposition Table - Replacement and PV
In fact I got the idea for using set-associative hashing from looking at the code of Stockfish. It was a sensible scheme, and their overwrite rules are identical anyway to what I had been doing for decades before, except now you have a choice of 4 entries to choose from.
lucasart wrote:OK, so pretty much like Stockfish then. I'm going to experiment with that. But first of all, I need to implement an eval cache, so I can remove the eval from the TT entries and keep the TT entries compact and efficient.
Don wrote:I also use set-associative hashing. I have rules which determine which of the 4 entries to replace, but one way or the other one of them always gets replaced.
I use a scoring system because the rules are fairly complicated. If I find an empty or matching entry it is always used; otherwise I consider the "age" of the entry, the depth and the bounds.
I experiment with this periodically but I have yet to improve on the basic scheme.
I do not know if this is an original idea from SF or not. There are numerous hash table schemes I have seen, but I think this is about as good as it gets for something simple.
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Transposition Table - Replacement and PV
I don't know that it is just 1%. If I complete a search and fail low, storing that is certainly useful information, and storing fail-low positions happens at every other ply along a path.
diep wrote:Bob, the way I read Joost's posting, this isn't an HGM-type brainwave; it is something that actually works for him.
bob wrote:I don't think this works very well. One needs to ALWAYS store an entry, period. The only question is what gets overwritten, not IF something gets overwritten. Otherwise you can "outsearch" the TT and reach a point where it doesn't help at all, because you refuse to overwrite, and the stuff you are trying to store is important for future efficiency.
Joost Buijs wrote:What works the best for me is:
Always overwrite entries of an older table generation.
And if the table generation is the same then:
Lowerbound and exact will always overwrite an existing entry if the depth >= existing depth or the existing entry is an upperbound.
Upperbounds will only overwrite upperbounds with lesser or equal depth.
So upperbounds will never overwrite lowerbound or exact entries from the same table generation.
I make multiple tests, but in every case that HashStore() is called, I store that entry, period...
Also, it basically says that in some huge amount of cases, some percentage close to 99% I'd guess, he is overwriting.
So you're disagreeing here now about that 1%?
I think you have to always store the current position, and simply try to choose intelligently what you are going to replace.