hgm wrote: Well, 1 MB would be more suitable. The calculation above was for the total number of nodes. The number of distinct nodes in the tree (and thus the required number of hash entries) might be far smaller, perhaps by as much as a factor of 10. For one thing, each new iteration usually revisits the nodes of the previous iteration.

I see... OK!
Hash table
Moderators: hgm, Rebel, chrisw
-
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Hash table
Daniel José - http://www.andscacs.com
-
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Hash table
Done the tests. The new tests with 4 MB greatly accentuated the difference between the versions, as predicted by HGM.
Between the unmodified version of Andscacs and the code I showed here:
Code: Select all
1 Andscacs 0.91102 : 2859.4 2.6 5315.5 10428 51.0%
2 Andscacs 0.91095 : 2852.6 2.6 5112.5 10428 49.0%
so a nice improvement.
Also I tested 0.91102 against HGM's code, but somehow mine was more than 30 Elo stronger. Take into account that Andscacs uses the hash differently, as commented, so HGM's code probably needs to be adapted in a different way.
Anyway, I'm done with those tests, other than the SMP one, which I'm running to make sure the change is also good on SMP.
It would be great if someone tested my code in Stockfish or their own engine and reported the results here.
Thanks all!
Daniel José - http://www.andscacs.com
-
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Hash table
cdani wrote: Anyway I'm done with those tests, other than the SMP one that I'm doing to be sure that it is also good on SMP.

The 3-CPU test gave almost the same good result as the 1-CPU one, so all is OK.
Daniel José - http://www.andscacs.com
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Hash table
cdani wrote: Hello. I'm trying to improve the main hash table again, and specifically I'm trying to find which of the standard parameters are the most important and which less, to decide which of the 4 entries in any hash bucket is the best one to overwrite when all 4 are occupied. So for each entry I have the standard
* age
* type (exact, alpha, beta)
* depth
Is there any study on a clear best strategy? Thanks.

bob wrote: Best move is certainly critical. (I have to assume you store a score / bound, otherwise it is pointless.)

cdani wrote: Sorry, I was not clear enough. Of course I store the best move, the key, and the eval (it is Andscacs), but the three-item list I put here was about the parameters I was considering when deciding which of the 4 bucket entries to overwrite. And the question was about the best known algorithm for that. Thanks.

For me, I first try to overwrite the entry with a matching signature, if one exists. Second, I overwrite the entry with the shallowest depth whose age differs from the current age. Finally, I overwrite the entry with the shallowest depth, period. I.e., this is a staged "always store" type of algorithm: I will never not store the current position.
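Bob's staged scheme can be sketched as follows. The struct layout and field names are illustrative, not taken from Crafty or Andscacs; only the three-stage victim selection is the point:

```c
#include <stdint.h>

/* Hypothetical hash entry; field names are illustrative. */
typedef struct {
    uint64_t sig;    /* upper bits of the Zobrist key            */
    int8_t   depth;  /* draft the entry was stored with          */
    uint8_t  age;    /* search counter when the entry was stored */
} Entry;

#define BUCKET_SIZE 4

/* Staged "always store" victim selection:
 * 1. an entry with a matching signature, if one exists;
 * 2. else the shallowest entry from an old search (age mismatch);
 * 3. else simply the shallowest entry.
 * Something is always overwritten.                               */
static int pick_victim(const Entry b[BUCKET_SIZE], uint64_t sig, uint8_t cur_age)
{
    int i, victim = 0, old_victim = -1;
    for (i = 0; i < BUCKET_SIZE; i++) {
        if (b[i].sig == sig)
            return i;                                   /* stage 1 */
        if (b[i].depth < b[victim].depth)
            victim = i;                                 /* stage 3 candidate */
        if (b[i].age != cur_age &&
            (old_victim < 0 || b[i].depth < b[old_victim].depth))
            old_victim = i;                             /* stage 2 candidate */
    }
    return old_victim >= 0 ? old_victim : victim;
}
```

For example, with depths {10, 10, 12, 8} all from the current search, the depth-8 slot is chosen; if the depth-12 slot were the only one from an older search, it would be chosen instead.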
-
- Posts: 7000
- Joined: Thu Aug 18, 2011 12:04 pm
- Full name: Ed Schröder
Re: Hash table
bob wrote: For me, I first try to overwrite the entry with a matching signature, if one exists. Second, I overwrite the entry with the shallowest depth whose age differs from the current age. Finally, I overwrite the entry with the shallowest depth, period. I.e., this is a staged "always store" type of algorithm: I will never not store the current position.

Even if the depth of the current position is the shallowest of all 4 in the bucket?
-
- Posts: 931
- Joined: Tue Mar 09, 2010 3:46 pm
- Location: New York
- Full name: Álvaro Begué (RuyDos)
Re: Hash table
Rebel wrote: Even if the depth of the current position is the shallowest of all 4 in the bucket?

Yes, you want to always write the newest information somewhere. This was a measurable improvement in RuyDos.
-
- Posts: 7000
- Joined: Thu Aug 18, 2011 12:04 pm
- Full name: Ed Schröder
Re: Hash table
AlvaroBegue wrote: Yes, you want to always write the newest information somewhere. This was a measurable improvement in RuyDos.

As an example: in a bucket you have 4 entries with depths 10, 10, 12 and 8, and the current position has a depth of 2 (or worse, a negative value in QS). You then sacrifice the depth=8 entry?
Instinctively that sounds counterproductive.
-
- Posts: 931
- Joined: Tue Mar 09, 2010 3:46 pm
- Location: New York
- Full name: Álvaro Begué (RuyDos)
Re: Hash table
Rebel wrote: As an example: in a bucket you have 4 entries with depths 10, 10, 12 and 8, and the current position has a depth of 2 (or worse, a negative value in QS). You then sacrifice the depth=8 entry? Instinctively that sounds counterproductive.

Yes, we are all talking about the same thing.
Think of a situation where at high depth we discover some tactical problem with the move we were thinking of doing. The shape of the tree will change dramatically, and there will be many entries in the hash table that have high depths but are no longer relevant. You wouldn't want to slow down the search of the new parts of the tree that need to be explored because the hash is already saturated.
In any case, as you probably know, testing trumps instinct.
-
- Posts: 7000
- Joined: Thu Aug 18, 2011 12:04 pm
- Full name: Ed Schröder
Re: Hash table
AlvaroBegue wrote: Think of a situation where at high depth we discover some tactical problem with the move we were thinking of doing. The shape of the tree will change dramatically, and there will be many entries in the hash table that have high depths but are no longer relevant. You wouldn't want to slow down the search of the new parts of the tree that need to be explored because the hash is already saturated. In any case, as you probably know, testing trumps instinct.

You don't know that those entries are no longer relevant. The next moves might need them, and the next iteration will certainly miss those overwritten high depths.

Sure, and with my choice of wording I didn't want to imply I never test. On the contrary. Here is an idea I have used to tackle situations like the example: if all 4 depth checks fail, store the current position in a second hash table (what I have called a CACHE table): always overwrite, no depth checks, no key checks, no age checks, just store. Make it big enough, say 64 MB or so.
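Rebel's fallback CACHE table might look like the sketch below. The type and function names are mine, and the layout is illustrative: a 16-byte entry (8-byte key plus depth, padded) with 2^22 slots gives the 64 MB he mentions.

```c
#include <stdint.h>

/* Illustrative entry for the always-overwrite fallback table.
 * Padded to 16 bytes on typical platforms.                    */
typedef struct {
    uint64_t key;    /* full Zobrist key for verification */
    int8_t   depth;  /* draft the position was stored with */
} CacheEntry;

#define CACHE_ENTRIES (1u << 22)   /* 2^22 * 16 bytes = 64 MB */

static CacheEntry cache[CACHE_ENTRIES];

/* Fallback store: no depth, key, or age checks, just overwrite. */
static void cache_store(uint64_t key, int8_t depth)
{
    CacheEntry *e = &cache[key & (CACHE_ENTRIES - 1)];
    e->key   = key;
    e->depth = depth;
}

/* Probe the cache; returns 1 on a hit and fills *out. */
static int cache_probe(uint64_t key, CacheEntry *out)
{
    const CacheEntry *e = &cache[key & (CACHE_ENTRIES - 1)];
    if (e->key != key)
        return 0;
    *out = *e;
    return 1;
}
```

The probe order would be: main table first, and on a miss this cache, so a position rejected by all four depth checks is still findable later in the same search.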
-
- Posts: 373
- Joined: Wed Mar 22, 2006 10:17 am
- Location: Novi Sad, Serbia
- Full name: Karlo Balla
Re: Hash table
Rebel wrote: Here is an idea I have used: if all 4 depth checks fail, store the current position in a second CACHE hash table: always overwrite, no depth checks, no key checks, no age checks, just store. Make it big enough, say 64 MB or so.

Essentially, it's the same thing: the slot with the lowest depth in the bucket is effectively part of an always-replace hash table.
Best Regards,
Karlo Balla Jr.