Hash table

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

User avatar
Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

Re: Hash table

Post by Rebel »

Karlo Bala wrote:
Rebel wrote: Sure, and with my choice of wording I didn't want to imply I never did. On the contrary. Here is an idea I have used to tackle situations like the one in the example. If all 4 depth checks fail, store the current position in a second (what I have called) CACHE hash table: always overwrite, no depth checks, key checks or age checks, just store. Make it big enough, say 64 MB or so.
Essentially, it's the same thing. The slot with the lowest depth in the bucket is actually part of the always replace hash table.
It's not, read again.
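The difference, in sketch form (not my real code, just C-ish pseudo code with made-up names):

Code:

#include <stdint.h>

typedef struct {
    uint64_t key;
    uint16_t move;
    int16_t  score;
    int8_t   depth;
    uint8_t  age;
    uint8_t  flags;     /* exact / lower bound / upper bound */
} TTEntry;

#define BUCKET_SIZE 4

extern TTEntry  main_tt[];    /* bucketed main table */
extern TTEntry  cache_tt[];   /* separate always-overwrite CACHE table, e.g. 64 MB */
extern uint64_t main_mask;    /* number of buckets - 1 */
extern uint64_t cache_mask;   /* number of cache slots - 1 */

void tt_store(uint64_t key, TTEntry newe)
{
    TTEntry *bucket = &main_tt[(key & main_mask) * BUCKET_SIZE];

    /* normal bucket store, guarded by the depth checks */
    for (int i = 0; i < BUCKET_SIZE; i++) {
        if (newe.depth >= bucket[i].depth) {
            bucket[i] = newe;
            return;
        }
    }

    /* all 4 depth checks failed: dump it in the CACHE table,
       no depth checks, key checks or age checks, just store */
    cache_tt[key & cache_mask] = newe;
}

The overflow goes into a separate table, not into a slot of the same bucket (and the probe then looks in both tables).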
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hash table

Post by bob »

Rebel wrote:
bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. I.e. this is a staged "always store" type of algorithm; I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
Yes. If you do not store every node as it is searched, the search will stall badly. You can end up with useless deep-draft entries and no useful entries for the current part of the tree.
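In code the staged selection looks roughly like this (just a sketch with made-up names, not my actual source):

Code:

#include <stdint.h>

#define BUCKET_SIZE 4

typedef struct {        /* only the fields the replacement logic needs */
    uint64_t key;
    int8_t   depth;
    uint8_t  age;
} TTEntry;

void tt_store(TTEntry *bucket, TTEntry newe, uint8_t current_age)
{
    int victim = -1;

    /* stage 1: overwrite the entry with a matching signature, if any */
    for (int i = 0; i < BUCKET_SIZE; i++)
        if (bucket[i].key == newe.key) { bucket[i] = newe; return; }

    /* stage 2: the shallowest entry whose age differs from the current age */
    for (int i = 0; i < BUCKET_SIZE; i++)
        if (bucket[i].age != current_age &&
            (victim < 0 || bucket[i].depth < bucket[victim].depth))
            victim = i;

    /* stage 3: otherwise the shallowest entry, period */
    if (victim < 0) {
        victim = 0;
        for (int i = 1; i < BUCKET_SIZE; i++)
            if (bucket[i].depth < bucket[victim].depth)
                victim = i;
    }

    bucket[victim] = newe;   /* the current position is always stored */
}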
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash table

Post by hgm »

'Recent' is far more important than 'deep', because it strongly affects the probability that you will revisit the position at all. It is not even true that deep entries represent more search effort than shallow entries. If all daughters (or a single cut daughter) are still in the table, a miss on a 20-ply node costs as little as a 1-ply search.
User avatar
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

hgm wrote:'Recent' is far more important than 'deep'...
Knowing that, maybe there is room for improvement by deprecating hash entries somehow when there is a lot of hash pressure.
For example, by counting the number of hash replacements of the current age and, if they surpass some threshold after some time, simply incrementing the current age in the next iteration. I will make some attempts at this.
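Something like this, more or less (untested sketch; the names and the threshold are made up):

Code:

#include <stdint.h>

static uint8_t  current_age;
static unsigned same_age_replacements;

/* called whenever a store overwrites an entry that was written
   during the current search (same age) */
void count_replacement(uint8_t victim_age)
{
    if (victim_age == current_age)
        same_age_replacements++;
}

/* called at the start of every iteration */
void maybe_bump_age(unsigned threshold)
{
    if (same_age_replacements > threshold) {
        current_age++;              /* deprecates everything stored so far */
        same_age_replacements = 0;
    }
}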
User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: Hash table

Post by lucasart »

cdani wrote:Hello. I'm trying to improve the main hash table again, and specifically I'm trying to find which of the standard parameters are the most important and which less so, to decide which of the 4 entries in a hash bucket is the best one to overwrite when all 4 are occupied.

So for each entry I have the standard
* age
* type (exact, alpha, beta)
* depth

Is there any study of a clear best strategy?

Thanks.
I use the following, extremely advanced technique (just kidding). It's called "always replace".

Seriously, it's surprisingly hard to beat. Nothing I tried managed to beat it measurably.
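For reference, the whole "policy" is then just this (illustrative names):

Code:

#include <stdint.h>

typedef struct { uint64_t key; /* move, score, depth, bound ... */ } TTEntry;

extern TTEntry  tt[];       /* one entry per slot, no buckets */
extern uint64_t tt_mask;    /* number of slots - 1 (power of two) */

/* "always replace": no depth, age or bound tests at all */
void tt_store(uint64_t key, TTEntry e)
{
    tt[key & tt_mask] = e;
}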
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash table

Post by hgm »

cdani wrote:Knowing that, maybe there is room for improvement by deprecating hash entries somehow when there is a lot of hash pressure.
For example, by counting the number of hash replacements of the current age and, if they surpass some threshold after some time, simply incrementing the current age in the next iteration. I will make some attempts at this.
This is what 'equidistributed draft' aims to do. As long as deep entries still take up a negligible part of the table, they are protected by their depth. But when their number grows larger than that of the low-depth entries, so that they would start to lock those out of the table and leave them dependent on the always-replace slot (where they would be very short-lived), they are no longer sacred even when from the same search, and start to be replaced until their number gets low enough again.

Effectively this partitions the table dynamically into a number of equally sized sections, one for each depth; the arrival of a new entry of a given depth will then indirectly push out another entry of that depth, because that depth is now 'over-represented'. Because high-depth entries are produced by the search less frequently than low-depth entries, the turn-over of the high-depth entries will be much slower.
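A much simplified sketch of the victim selection (not my actual code; the per-depth bookkeeping and all names are illustrative, and depth_count would have to be updated on every store and replacement):

Code:

#include <stdint.h>

#define MAX_DEPTH   64
#define BUCKET_SIZE 4

typedef struct { uint64_t key; uint8_t depth; /* ... */ } TTEntry;

static unsigned depth_count[MAX_DEPTH];  /* stored entries per depth */
static unsigned depth_quota;             /* e.g. number_of_entries / MAX_DEPTH */

/* an entry is only protected by its depth as long as that depth
   is not over-represented in the table */
int pick_victim(const TTEntry *bucket)
{
    int victim = 0;
    for (int i = 1; i < BUCKET_SIZE; i++) {
        int prot_i = depth_count[bucket[i].depth] <= depth_quota;
        int prot_v = depth_count[bucket[victim].depth] <= depth_quota;
        /* prefer to evict an unprotected entry; among equals, the shallowest */
        if (prot_i < prot_v ||
            (prot_i == prot_v && bucket[i].depth < bucket[victim].depth))
            victim = i;
    }
    return victim;
}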
User avatar
Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

Re: Hash table

Post by Rebel »

bob wrote:
Rebel wrote:
bob wrote: For me, I first try to overwrite the entry with a matching signature, if such exists. Second I overwrite the entry with the shallowest depth where age is different from current age. Finally, I overwrite the entry with the shallowest depth, period. I.e. this is a staged "always store" type of algorithm; I will never not store the current position.
Even if the depth of the current position is the shallowest of all 4 in the bucket?
Yes. If you do not store every node as it is searched, the search will stall badly. You can end up with useless deep-draft entries and no useful entries for the current part of the tree.
Well, it's interesting enough to try, to see if it can beat my cache system (see above).