Hash tables and iterative deepening

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Hash tables and iterative deepening

Post by jwes »

One advantage of iterative deepening is improving move ordering by retrieving the best move from the previous ply from the hash table. But when the number of nodes searched is significantly larger than the hash table size, positions near the leaves will most likely not be in the hash table on the next iteration, and the program will fall back on the default move ordering. One idea to help with this: when the hash table is nearly full, only add new hash table entries if the value is exact, or if it is a fail high and the fail-high move is not the first move returned by the default move ordering. If the position is already in the hash table, update it of course. If the program uses multiple entries with depth-preferred and always-replace replacement strategies, it might also be better to switch to just depth-preferred, though this would have to be tested. Do others have any ideas about this?
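Roughly, the conditional store might look like this. The entry layout, flag names, and the 90%-full threshold are all illustrative, not taken from any particular engine:

```c
#include <stdint.h>

enum { FLAG_EXACT = 1, FLAG_LOWER = 2, FLAG_UPPER = 3 };

typedef struct {
    uint64_t key;
    int16_t  value;
    uint8_t  depth;
    uint8_t  flag;
    uint16_t move;
} TTEntry;

#define TT_SIZE (1 << 20)
static TTEntry tt[TT_SIZE];
static uint64_t tt_used;   /* number of slots written so far */

/* FLAG_LOWER marks a fail high (the score is a lower bound).
   Once the table is nearly full, a brand-new position is stored only
   if its score is exact, or if it failed high on a move the default
   ordering would NOT have tried first.  Existing entries are always
   updated. */
void tt_store(uint64_t key, int value, int depth, int flag,
              uint16_t move, uint16_t default_first_move)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    int nearly_full = tt_used > (uint64_t)TT_SIZE * 9 / 10;

    if (e->key != key && nearly_full) {
        int useful = flag == FLAG_EXACT ||
                     (flag == FLAG_LOWER && move != default_first_move);
        if (!useful)
            return;   /* skip: preserves whatever already sits here */
    }
    if (e->key == 0)
        tt_used++;
    e->key   = key;
    e->value = (int16_t)value;
    e->depth = (uint8_t)depth;
    e->flag  = (uint8_t)flag;
    e->move  = move;
}
```

The point of the `useful` test is that a fail high on the move the default ordering already tries first adds no ordering information, so it is not worth displacing an old entry for it.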
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Hash tables and iterative deepening

Post by Michael Sherwin »

Most programs that I am aware of store a sorted root list of moves that is updated in some way after each iteration and then they are played in order of that sorting. No hash probe is done to retrieve the best move at the root. What do you mean by "previous ply"? Do you mean previous iteration? In a search the previous ply's best move is a move by the opposite side and I do not believe that that is what you mean.
If you are on a sidewalk and the covid goes beep beep
Just step aside or you might have a bit of heat
Covid covid runs through the town all day
Can the people ever change their ways
Sherwin the covid's after you
Sherwin if it catches you you're through
jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: Hash tables and iterative deepening

Post by jwes »

I am referring to positions near the leaves. These are the ones that will generally be overwritten when the hash table is full. I did mean previous iteration, not previous ply.
Aleks Peshkov
Posts: 892
Joined: Sun Nov 19, 2006 9:16 pm
Location: Russia

Re: Hash tables and iterative deepening

Post by Aleks Peshkov »

jwes wrote:One idea to help with this: when the hash table is nearly full, only add new hash table entries if the value is exact, or if it is a fail high and the fail-high move is not the first move returned by the default move ordering.
You suggest a replacement scheme that prefers old entries over new ones if the old hash move improves on the default move ordering. But it is hard to predict which leaves will be important and which subtrees will be pruned away.

IMHO fail-high and fail-low nodes are equally important; they mirror each other the same way as white/black and odd/even plies. I suggest another replacement condition: keep the entry whose value is nearer to the current principal variation evaluation, because it has a better chance of becoming part of a future principal variation.
If the program uses multiple entries with depth-preferred and always-replace replacement strategies, it might also be better to switch to just depth-preferred, though this would have to be tested. Do others have any ideas about this?
I disagree. There are only a few deep nodes, and some of them are never used again because their containing subtree has been refuted. For example, if the first move of the principal variation changes, many old, useless deep entries will fight for their place in the TT with new shallow ones.
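The replacement condition suggested above could be sketched like this; the names and entry layout are illustrative only:

```c
#include <stdlib.h>

/* A sketch of "keep the entry whose score is nearer to the current
   principal variation evaluation": when two entries compete for a
   slot, the one whose value lies closer to the root score wins.
   This is not from any particular engine. */
typedef struct {
    int value;
    int depth;
} TTEntry;

/* Returns nonzero if the new entry should replace the old one. */
int prefer_new(const TTEntry *old_e, const TTEntry *new_e, int pv_score)
{
    int old_dist = abs(old_e->value - pv_score);
    int new_dist = abs(new_e->value - pv_score);
    return new_dist < old_dist;   /* closer to the PV score wins */
}
```

In practice this would presumably be combined with a depth term rather than used alone, and as the reply below notes, the scheme runs into the problem that bound entries do not tell you how far the true score is from the PV.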
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash tables and iterative deepening

Post by hgm »

Using pure depth-preferred replacement is generally a disaster, especially if the table is heavily overloaded. To recover good move ordering in nodes for which the hash entry was lost (or which are visited for the first time), one uses internal iterative deepening from that node. This rebuilds the tree from the previous iteration at the tips, in the local (always-replace) entries of the hash table. These entries then get new hits from parallel subtrees sprouted from nodes slightly closer to the root than the shallow entries that cannot be stored permanently.

The more the table is overloaded, the more equidistributed-depth replacement wins out over other schemes.
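A toy sketch of internal iterative deepening as described: when a node has no hash move, it first searches itself at reduced depth so that a good first move is recovered before the full-depth move loop. The one-slot hash-move table, the stub position, and the threshold are stand-ins, purely for illustration:

```c
#include <stdint.h>

#define IID_MIN_DEPTH 4           /* illustrative threshold */

typedef struct { uint64_t key; } Position;   /* stub position */

/* toy one-slot hash-move store, illustrative only */
static uint16_t stored_move;
static uint16_t probe_hash_move(const Position *pos)
{
    (void)pos;
    return stored_move;
}
static void store_hash_move(const Position *pos, uint16_t m)
{
    (void)pos;
    stored_move = m;
}

static int search(Position *pos, int alpha, int beta, int depth)
{
    if (depth <= 0)
        return 0;                 /* stand-in for quiescence/eval */

    uint16_t hash_move = probe_hash_move(pos);

    /* Internal iterative deepening: no hash move and enough depth
       left, so search the same node shallower first; the shallow
       search repopulates the always-replace entries, and the
       re-probe recovers a move to try first. */
    if (hash_move == 0 && depth >= IID_MIN_DEPTH) {
        search(pos, alpha, beta, depth - 2);
        hash_move = probe_hash_move(pos);
    }

    /* ... the normal move loop would go here, trying hash_move
       first; the toy just records a "best move" for later visits. */
    store_hash_move(pos, 1);
    return 0;
}
```

In a real engine the reduced-depth search would of course run over the actual move list and store its best move in the transposition table; the toy only shows the control flow.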
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Hash tables and iterative deepening

Post by Michael Sherwin »

jwes wrote:I am referring to positions near the leaves. These are the ones that will generally be overwritten when the hash table is full. I did mean previous iteration, not previous ply.
Okay, sorry, I was a little confused. I understand what you mean now. Just yesterday I tried only storing EXACT entries into the hash if the remaining depth was less than 'n', but not for the reasons that you mentioned. It did not help. I tried other related things as well, and all attempts to change how the hash tables worked led to worse performance.
jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: Hash tables and iterative deepening

Post by jwes »

Aleks Peshkov wrote:
jwes wrote:One idea to help with this: when the hash table is nearly full, only add new hash table entries if the value is exact, or if it is a fail high and the fail-high move is not the first move returned by the default move ordering.
You suggest replacement scheme that prefers old entries over new, if the old hash move improves default move ordering. But it is hard to predict what leaves will be important and what subtrees will be pruned away.

IMHO fail-high and fail-low nodes are equally important; they mirror each other the same way as white/black and odd/even plies.
Fail low nodes do not have any move ordering information in the hash table.
I suggest another replacement condition: keep the entry whose value is nearer to the current principal variation evaluation, because it has a better chance of becoming part of a future principal variation.
This info is not available. You only know that it failed high or failed low, not how close it is to the PV.
Chan Rasjid
Posts: 588
Joined: Thu Mar 09, 2006 4:47 pm
Location: Singapore

Re: Hash tables and iterative deepening

Post by Chan Rasjid »


Fail low nodes do not have any move ordering information in the hash table.
If QS is hashed as well, then fail-soft may be implemented, which can give a fail low as an exact entry (Exact Fail Soft) with the corresponding "exact" move for all QS and horizon nodes. But I am not sure of the exact hit rate.
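A toy sketch of fail-soft at such a node, showing how a fail low can still carry a best move. The node representation (a static eval plus a flat list of capture outcomes) is artificial, purely for illustration:

```c
#include <stdint.h>

typedef struct {
    int stand_pat;           /* static eval of the node */
    int capture_score[8];    /* score after each capture, mover's view */
    int n_captures;
} ToyNode;

static int last_best_move = -1;  /* move index stored alongside the bound */

/* Fail-soft quiescence: return the best score actually found rather
   than clamping to [alpha, beta].  On a fail low the returned value
   is an upper bound, but the move that produced it is still known
   and can be stored to seed move ordering later. */
int qsearch(const ToyNode *node, int alpha, int beta)
{
    int best = node->stand_pat;
    int best_move = -1;

    if (best >= beta) {
        last_best_move = -1;
        return best;                  /* fail-soft: may exceed beta */
    }
    if (best > alpha)
        alpha = best;

    for (int i = 0; i < node->n_captures; i++) {
        int score = node->capture_score[i];
        if (score > best) {
            best = score;
            best_move = i;            /* remembered even on a fail low */
            if (score > alpha)
                alpha = score;
            if (score >= beta)
                break;
        }
    }
    last_best_move = best_move;
    return best;                      /* may be below alpha: fail low */
}
```

With hard fail (returning alpha on a fail low) that move is thrown away, which is exactly why fail-low entries normally carry no ordering information.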

Rasjid