Hash table

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

hgm wrote:Well, 1MB would be more suitable. The calculation above was for the total number of nodes. The number of different nodes in the tree (and thus the required number of hash entries) might be far smaller, perhaps even by as much as a factor of 10. For one, each new iteration usually revisits the nodes of the previous iteration.
I see... Ok!
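A back-of-the-envelope version of that sizing argument, as a minimal sketch; the node count and entry size below are illustrative assumptions, not numbers from the thread:

Code:

#include <stdio.h>

int main(void)
{
    const double total_nodes = 650000.0; /* nodes visited in one search (assumed) */
    const double reuse       = 10.0;     /* visits per distinct node, per the factor-10 estimate */
    const double entry_bytes = 16.0;     /* size of one hash entry (assumed) */

    double distinct_nodes = total_nodes / reuse;
    double table_bytes    = distinct_nodes * entry_bytes;

    printf("~%.0f distinct nodes -> ~%.2f MB of hash\n",
           distinct_nodes, table_bytes / (1024.0 * 1024.0));
    return 0;
}

With these numbers, 65,000 distinct nodes at 16 bytes each come to roughly 1 MB, the order of magnitude hgm mentions.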
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

Done the tests. The new tests with 4 MB greatly magnified the difference between versions, as HGM predicted.

Between the unmodified version of Andscacs and the code I showed here:

Code:

                         Rating Error   Points   Games   Score
 1 Andscacs 0.91102    : 2859.4    2.6   5315.5   10428   51.0%
 2 Andscacs 0.91095    : 2852.6    2.6   5112.5   10428   49.0%
so a nice improvement.

I also tested 0.91102 against HGM's code, but somehow mine was more than 30 Elo stronger. Take into account that Andscacs uses the hash differently, as commented, so HGM's code probably needs to be adapted in a different way.

Anyway I'm done with those tests, other than the SMP one, which I'm running to be sure the change is also good with SMP.

It would be great if someone tested my code in Stockfish or their own engine and reported the results here.

Thanks all!
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

cdani wrote: Anyway I'm done with those tests, other than the SMP one, which I'm running to be sure the change is also good with SMP.
The 3-CPU test gave almost the same good result as the 1-CPU one, so all is OK.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hash table

Post by bob »

cdani wrote:
bob wrote:
cdani wrote:Hello. I'm trying to improve the main hash table again, and specifically I'm trying to find which of the standard parameters are most important and which less so, to decide which of the 4 entries in a hash bucket is the best one to overwrite when all 4 are occupied.

So for each entry I have the standard
* age
* type (exact, alpha, beta)
* depth

Is there any study of a clear best strategy?

Thanks.
Best move is certainly critical. (I have to assume you store a score / bound, otherwise it is pointless).
Sorry, I was not clear enough. Of course I store the best move, the key, and the eval (it is Andscacs :-) ), but the three-item list I put here was about the parameters I was considering for deciding which of the 4 entries in the bucket to overwrite. And the question was about the best known algorithm for that. Thanks
For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. IE this is a staged "always store" type of algorithm, I will never not store the current position.
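A minimal sketch of this staged replacement scheme; the entry layout and the names HashEntry, BUCKET_SIZE and select_slot are illustrative assumptions, not code from Crafty or any particular engine:

Code:

#include <stdint.h>

#define BUCKET_SIZE 4

typedef struct {
    uint64_t key;    /* signature of the stored position */
    int16_t  depth;  /* remaining search depth */
    uint8_t  age;    /* search counter value when the entry was stored */
    /* score, bound type, best move, ... omitted for brevity */
} HashEntry;

/* Pick the slot to overwrite. By construction this never refuses,
 * so the current position is always stored somewhere. */
static HashEntry *select_slot(HashEntry *bucket, uint64_t key, uint8_t current_age)
{
    HashEntry *shallowest = &bucket[0];
    HashEntry *shallowest_old = NULL;

    for (int i = 0; i < BUCKET_SIZE; i++) {
        /* Stage 1: an entry with a matching signature is always replaced. */
        if (bucket[i].key == key)
            return &bucket[i];

        /* Stage 2 candidate: shallowest entry from an older search. */
        if (bucket[i].age != current_age &&
            (shallowest_old == NULL || bucket[i].depth < shallowest_old->depth))
            shallowest_old = &bucket[i];

        /* Stage 3 candidate: shallowest entry, period. */
        if (bucket[i].depth < shallowest->depth)
            shallowest = &bucket[i];
    }
    return shallowest_old ? shallowest_old : shallowest;
}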
Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

Re: Hash table

Post by Rebel »

bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. IE this is a staged "always store" type of algorithm, I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: Hash table

Post by AlvaroBegue »

Rebel wrote:
bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. IE this is a staged "always store" type of algorithm, I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
Yes, you want to always write the newest information somewhere. This was a measurable improvement in RuyDos.
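In code terms, the point is that the store path has no refusal branch. A minimal sketch building on the select_slot example above (tt_store is an illustrative name):

Code:

/* Pick a victim slot and overwrite it unconditionally, so the newest
 * result always lands somewhere, even when its depth is the shallowest
 * in the bucket. */
static void tt_store(HashEntry *bucket, const HashEntry *cur, uint8_t current_age)
{
    HashEntry *slot = select_slot(bucket, cur->key, current_age);
    *slot = *cur;   /* no refusal path */
}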
Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

Re: Hash table

Post by Rebel »

AlvaroBegue wrote:
Rebel wrote:
bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. IE this is a staged "always store" type of algorithm, I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
Yes, you want to always write the newest information somewhere. This was a measurable improvement in RuyDos.
As an example: a bucket has 4 entries with depths 10, 10, 12 and 8. The current position has a depth of 2 (or worse, a negative value in QS), and then you sacrifice the depth=8 entry?

Instinctively that sounds counterproductive.
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: Hash table

Post by AlvaroBegue »

Rebel wrote:
AlvaroBegue wrote:
Rebel wrote:
bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. IE this is a staged "always store" type of algorithm, I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
Yes, you want to always write the newest information somewhere. This was a measurable improvement in RuyDos.
As an example: a bucket has 4 entries with depths 10, 10, 12 and 8. The current position has a depth of 2 (or worse, a negative value in QS), and then you sacrifice the depth=8 entry?

Instinctively that sounds counterproductive.
Yes, we are all talking about the same thing. :)

Think of a situation where at high depth we discover some tactical problem with the move we were thinking of doing. The shape of the tree will change dramatically, and there will be many entries in the hash table that have high depths but are no longer relevant. You wouldn't want to slow down the search of the new parts of the tree that need to be explored because the hash is already saturated.

In any case, as you probably know, testing trumps instinct.
Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

Re: Hash table

Post by Rebel »

AlvaroBegue wrote:
Rebel wrote:
AlvaroBegue wrote:
Rebel wrote:
bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. IE this is a staged "always store" type of algorithm, I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
Yes, you want to always write the newest information somewhere. This was a measurable improvement in RuyDos.
As an example: a bucket has 4 entries with depths 10, 10, 12 and 8. The current position has a depth of 2 (or worse, a negative value in QS), and then you sacrifice the depth=8 entry?

Instinctively that sounds counterproductive.
Yes, we are all talking about the same thing. :)

Think of a situation where at high depth we discover some tactical problem with the move we were thinking of doing. The shape of the tree will change dramatically, and there will be many entries in the hash table that have high depths but are no longer relevant. You wouldn't want to slow down the search of the new parts of the tree that need to be explored because the hash is already saturated.
You don't know that. The next moves might need them, and certainly the next iteration will miss those overwritten high depths.
In any case, as you probably know, testing trumps instinct.

:wink:

Sure, and with my choice of wording I didn't want to imply I never did. On the contrary. Here is an idea I have used to tackle situations like the example: if all 4 depth checks fail, store the current position in a second hash table (what I have called a CACHE table): always overwrite; no depth checks, no key checks, no age checks, just store. Make it big enough, say 64 MB or so.
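A sketch of this two-level idea, reusing HashEntry and BUCKET_SIZE from the earlier sketch; the main-table store is simplified to a bare depth check, and the names and table size are illustrative assumptions, not Rebel's actual code:

Code:

#define CACHE_ENTRIES (1u << 22)   /* 4M entries; at 16 bytes each roughly 64 MB */

static HashEntry cache_table[CACHE_ENTRIES];

static void store_with_cache(HashEntry *bucket, const HashEntry *cur)
{
    /* Try the 4 main-table entries first. */
    for (int i = 0; i < BUCKET_SIZE; i++) {
        if (cur->depth >= bucket[i].depth) {
            bucket[i] = *cur;
            return;
        }
    }
    /* All 4 depth checks failed: always overwrite in the CACHE table,
     * with no depth, key, or age checks. */
    cache_table[cur->key & (CACHE_ENTRIES - 1)] = *cur;
}

On a probe, a miss in the main bucket would then also check cache_table before giving up.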
Karlo Bala
Posts: 373
Joined: Wed Mar 22, 2006 10:17 am
Location: Novi Sad, Serbia
Full name: Karlo Balla

Re: Hash table

Post by Karlo Bala »

Rebel wrote: Sure, and with my choice of wording I didn't want to imply I never did. On the contrary. Here is an idea I have used to tackle situations like the example: if all 4 depth checks fail, store the current position in a second hash table (what I have called a CACHE table): always overwrite; no depth checks, no key checks, no age checks, just store. Make it big enough, say 64 MB or so.
Essentially, it's the same thing: the slot with the lowest depth in the bucket already acts as an always-replace hash table.
Best Regards,
Karlo Balla Jr.