Lecture notes for April 24, 1997
Hashing and Move Ordering
by David Eppstein, Dept. of Information & Computer Science, UC Irvine
"Don't hash the positions very near the leaves of the search tree, there are too many of them (they'll take away hash table space from more valuable positions) and you're not saving much time by avoiding searching them."
Is this statement true, and has anyone proven it?
Hashing question
Posts: 9 · Joined: Mon Feb 08, 2016 7:06 pm · Location: Nova Scotia, Canada
Álvaro Begué (RuyDos) · Posts: 931 · Joined: Tue Mar 09, 2010 3:46 pm · Location: New York
Re: Hashing question
kdjones wrote:
"Don't hash the positions very near the leaves of the search tree, there are too many of them (they'll take away hash table space from more valuable positions) and you're not saving much time by avoiding searching them."

I think that's false: there are engines that even store QS nodes in the transposition table! But of course it might have been true in 1997, when memory was less abundant.

This is one of the things that you need to try and test thoroughly.
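As a concrete illustration of storing q-search results like any others, here is a minimal transposition-table sketch in C. The layout, sizes, and names (`tt_store`, `tt_probe`, `TT_BITS`) are hypothetical, invented for this example rather than taken from any engine in the thread; the point is simply that depth-0 (quiescence) entries are written and probed exactly like deeper ones.

```c
#include <stdint.h>

#define TT_BITS 16                    /* 65536 entries, small for the sketch */
#define TT_SIZE (1u << TT_BITS)

typedef struct {
    uint64_t key;    /* full Zobrist key, for verification on probe */
    int16_t  score;
    int8_t   depth;  /* remaining depth ("draft"); 0 = quiescence */
    uint8_t  flag;   /* bound type (EXACT/LOWER/UPPER), unused in this sketch */
} TTEntry;

static TTEntry tt[TT_SIZE];

/* Store unconditionally, whatever the depth; qsearch entries included. */
static void tt_store(uint64_t key, int score, int depth)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    e->key = key;
    e->score = (int16_t)score;
    e->depth = (int8_t)depth;
}

/* Returns 1 and fills *score on a usable hit (stored draft is at least
 * the requested depth), 0 on a miss. */
static int tt_probe(uint64_t key, int depth, int *score)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    if (e->key == key && e->depth >= depth) {
        *score = e->score;
        return 1;
    }
    return 0;
}
```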
Posts: 411 · Joined: Thu Dec 30, 2010 4:48 am
Re: Hashing question
No. I believe Stockfish hashes all the way into qsearch (though I think some engines do omit qsearch). Replacement strategies can ensure that low-depth entries mostly replace only other low-depth entries, and while each hit saves little effort, you get many more hits to make up for it.
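One common way to get the replacement behaviour described above is a two-slot bucket: a depth-preferred slot plus an always-replace slot. The sketch below is hypothetical (names and sizes invented for illustration, not any engine's actual scheme); the effect is that shallow near-leaf stores mostly evict each other from the always-replace slot while deep, expensive entries survive.

```c
#include <stdint.h>

typedef struct {
    uint64_t key;
    int16_t  score;
    int8_t   depth;
} Entry;

/* Slot 0 keeps the deepest entry seen; slot 1 is always replaced. */
typedef struct { Entry deep, always; } Bucket;

#define NBUCKETS 4096
static Bucket table[NBUCKETS];

static void bucket_store(uint64_t key, int score, int depth)
{
    Bucket *b = &table[key % NBUCKETS];
    if (depth >= b->deep.depth) {        /* depth-preferred slot */
        b->deep.key = key;
        b->deep.score = (int16_t)score;
        b->deep.depth = (int8_t)depth;
    } else {                             /* always-replace slot */
        b->always.key = key;
        b->always.score = (int16_t)score;
        b->always.depth = (int8_t)depth;
    }
}

/* Check both slots of the bucket; returns NULL on a miss. */
static const Entry *bucket_probe(uint64_t key)
{
    Bucket *b = &table[key % NBUCKETS];
    if (b->deep.key == key)   return &b->deep;
    if (b->always.key == key) return &b->always;
    return 0;
}
```

With this layout a flood of depth-1 stores into a bucket only churns the always-replace slot; a depth-8 entry in the depth-preferred slot stays put.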
Posts: 20943 · Joined: Mon Feb 27, 2006 7:30 pm · Location: Birmingham, AL
Re: Hashing question
kdjones wrote:
"Don't hash the positions very near the leaves of the search tree, there are too many of them (they'll take away hash table space from more valuable positions) and you're not saving much time by avoiding searching them."

"Near the leaves" sounds reasonable. If you consider three types of nodes: (a) interior nodes (depth > 0), (b) frontier nodes (depth = 0), and (c) leaf nodes (everything beyond the depth = 0 nodes), then you might discover that probing the hash table saves you a bit of tree space, but the memory access also has a significant cost. I have not tested in quite a while, but the last time I did, hashing in q-search reduced the tree size by about 10%, yet it also slowed the search by about 10%, so it was a wash.

I elected not to hash in q-search, because once you begin to overwrite hash entries at a significant rate, that has a detrimental effect which adds to the 10% performance loss already mentioned.

Best advice? Test. YMMV. I hash up to but not including depth = 0 nodes, which are the first layer of the q-search.
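The "hash up to but not including depth = 0" policy amounts to a simple gate on the transposition-table store (and probe) path. The helper names below (`hashable`, `tt_store_gated`, `tt_raw_store`) are invented for illustration and are not any engine's actual code; `tt_raw_store` just counts writes so the gate's effect is visible.

```c
#include <stdint.h>

/* Interior nodes only: depth = 0 (first q-search layer) and below are
 * never stored, matching the policy described above. */
static int hashable(int depth)
{
    return depth >= 1;
}

/* Stand-in for the real table update; counts how often it runs. */
static int stores = 0;
static void tt_raw_store(uint64_t key, int score, int depth)
{
    (void)key; (void)score; (void)depth;
    ++stores;
}

/* The gated store the search would actually call. */
static void tt_store_gated(uint64_t key, int score, int depth)
{
    if (hashable(depth))
        tt_raw_store(key, score, depth);
}
```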
H G Muller · Posts: 27808 · Joined: Fri Mar 10, 2006 10:06 am · Location: Amsterdam
Re: Hashing question
kdjones wrote:
"Don't hash the positions very near the leaves of the search tree, there are too many of them (they'll take away hash table space from more valuable positions) and you're not saving much time by avoiding searching them."

The point is that you also don't save much time by avoiding searching positions close to the root, when their daughter nodes are all in the hash table. You can then get a d=20 result by just doing a d=1 search to probe all the daughters.

So indeed, the statement is pure nonsense. No matter how small the hash table, the available space is used most effectively by distributing it equally over all drafts, i.e. as many d=1 positions as d=15 positions. A valid reason for not storing below a certain depth could be memory bandwidth. OTOH, a more effective way to combat that bottleneck might be to have a dedicated hash table for leaf nodes that fits entirely in cache.
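The cache-resident leaf table can be sketched as a second, much smaller array reserved for leaf/q-search entries, dimensioned so the whole thing fits in cache, while interior nodes keep using the big main table. All sizes and names here are hypothetical: 32768 padded 16-byte entries come to roughly 512 KB.

```c
#include <stdint.h>

/* 16 bytes per entry after alignment padding. */
typedef struct {
    uint64_t key;
    int16_t  score;
} LeafEntry;

#define LEAF_BITS 15                       /* 32768 entries, ~512 KB total */
#define LEAF_SIZE (1u << LEAF_BITS)
static LeafEntry leaf_tt[LEAF_SIZE];

/* Leaf entries carry no draft: they are all depth <= 0, so only the key
 * and score are kept, which is what makes the table small enough to
 * stay cache-resident. */
static void leaf_store(uint64_t key, int score)
{
    LeafEntry *e = &leaf_tt[key & (LEAF_SIZE - 1)];
    e->key = key;
    e->score = (int16_t)score;
}

static int leaf_probe(uint64_t key, int *score)
{
    LeafEntry *e = &leaf_tt[key & (LEAF_SIZE - 1)];
    if (e->key == key) {
        *score = e->score;
        return 1;
    }
    return 0;
}
```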
Posts: 20943 · Joined: Mon Feb 27, 2006 7:30 pm · Location: Birmingham, AL
Re: Hashing question
hgm wrote:
The point is that you also don't save much time by avoiding searching positions close to the root, when their daughter nodes are all in the hash table. You can then get a d=20 result by just doing a d=1 search to probe all the daughters. [...] A valid reason for not storing below a certain depth could be memory bandwidth. OTOH, a more effective way to combat that bottleneck might be to have a dedicated hash table for leaf nodes that fits entirely in cache.

I've experimented with the dedicated hash table, both for normal and pawn hashes, and while it actually offers a little improvement for a single-threaded search, it is a performance killer for SMP, since you now have N copies of it with N threads. You are primarily using L3 for this, since L1 and L2 are likely too small, and because L3 is shared, the pressure on it goes way up. In my case, with dual 10-core processors and even 24 MB of L3, ten times anything reasonable becomes pretty large, particularly if you do this for both the pawn hash and frontier hashing. Some engines also use per-thread history counters (I don't), on top of all the per-thread board state and search-state info.
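A back-of-the-envelope check on that cache-pressure argument, with hypothetical sizes (the 1 MB figures are illustrative, not measured from any engine): ten threads on one chip, each with a private leaf table and pawn hash, sharing a 24 MB L3.

```c
/* Total footprint of per-thread private tables, in KB.  With ten threads
 * and 1 MB each for the leaf table and pawn hash (illustrative numbers),
 * that is 10 * 2048 KB = 20480 KB, i.e. more than 80% of a 24 MB
 * (24576 KB) shared L3, before any per-thread board or search state is
 * counted. */
static unsigned private_kb(unsigned threads, unsigned leaf_kb, unsigned pawn_kb)
{
    return threads * (leaf_kb + pawn_kb);
}
```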