Null-move pruning and the hash table

rvida
Posts: 481
Joined: Thu Apr 16, 2009 12:00 pm
Location: Slovakia, EU

Re: Null-move pruning and the hash table

Post by rvida »

micron wrote: Nothing interesting. Just the difference between:
SaveInHashTable( alpha, ..., depth ); // oops
SaveInHashTable( alpha, ..., depth - R );
Hm, that does not make sense... You should store with depth, not depth-R.

Imagine that you revisit this node with exactly the same remaining depth and get a hash hit. You will not be able to cut off (because of insufficient draft), so you will repeat exactly the same null-move search - which will fail high again. You end up with a whole bunch of nodes wasted on a repeated search.
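To see why, here is a minimal sketch of the usual probe-time test (HashEntry, canCutoff and the Bound names are illustrative, not from any particular engine). If the fail-high was stored with draft depth-R, then on the revisit e.draft < depth holds, the entry is ignored, and the whole reduced search is repeated:

Code: Select all

	#include <cstdint>

	enum Bound { EXACT, LOWER, UPPER };

	struct HashEntry {
		std::uint64_t key;
		int           draft;   /* remaining depth the entry was searched to */
		int           score;
		Bound         bound;
	};

	/* an entry can only give a cutoff if its draft covers the depth
	   we need right now */
	bool canCutoff( const HashEntry &e, int depth, int beta )
	{
		if ( e.draft < depth )        /* insufficient draft */
			return false;
		return e.bound == LOWER && e.score >= beta;
	}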
micron
Posts: 155
Joined: Mon Feb 15, 2010 9:33 am
Location: New Zealand

Re: Null-move pruning and the hash table

Post by micron »

micron wrote:SaveInHashTable( alpha, ..., depth ); // oops
SaveInHashTable( alpha, ..., depth - R );
michiguel wrote:That is what I guessed. Unless I am confused about what you are doing, you should be saving SaveInHashTable( alpha, ..., depth );
You save in the hashtable the same conditions you faced when you entered search(). Otherwise, the next time you enter this node, you would not be able to have a hash hit.
rvida wrote:Hm, that does not make sense... You should store with depth, not depth-R.
Imagine that you revisit this node with exactly the same remaining depth and get a hash hit. You will not be able to cut off (because of insufficient draft), so you will repeat exactly the same null-move search - which will fail high again. You end up with a whole bunch of nodes wasted on a repeated search.
Your persuasive verbal arguments are, alas, not supported by my tests. After a null-move fails high, I have tried three options:
(1) return without hashing the result
(2) SaveInHashTable( alpha, ..., depth );
(3) SaveInHashTable( alpha, ..., depth - R );

In tests with many different searches, #2 is so much slower than the other options that I regard it as a bug. A verbal argument might be: "it's naughty to put untruthful depth claims in the TT; if caught, you will be punished for lying".
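For concreteness, this is roughly where the three options sit in the fail-high branch (a sketch only; Search, MakeNullMove and UnmakeNullMove are assumed names, and the SaveInHashTable() calls follow the snippets above):

Code: Select all

	MakeNullMove();
	score = -Search( -beta, -beta + 1, depth - R - 1 );
	UnmakeNullMove();
	if ( score >= beta )
	{
		/* (1) store nothing, just return                */
		/* (2) SaveInHashTable( alpha, ..., depth );     */
		/* (3) SaveInHashTable( alpha, ..., depth - R ); */
		return score;
	}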
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Null-move pruning and the hash table

Post by bob »

micron wrote:
micron wrote:SaveInHashTable( alpha, ..., depth ); // oops
SaveInHashTable( alpha, ..., depth - R );
michiguel wrote:That is what I guessed. Unless I am confused about what you are doing, you should be saving SaveInHashTable( alpha, ..., depth );
You save in the hashtable the same conditions you faced when you entered search(). Otherwise, the next time you enter this node, you would not be able to have a hash hit.
rvida wrote:Hm, that does not make sense... You should store with depth, not depth-R.
Imagine that you revisit this node with exactly the same remaining depth and get a hash hit. You will not be able to cut off (because of insufficient draft), so you will repeat exactly the same null-move search - which will fail high again. You end up with a whole bunch of nodes wasted on a repeated search.
Your persuasive verbal arguments are, alas, not supported by my tests. After a null-move fails high, I have tried three options:
(1) return without hashing the result
(2) SaveInHashTable( alpha, ..., depth );
(3) SaveInHashTable( alpha, ..., depth - R );

In tests with many different searches, #2 is so much slower than the other options that I regard it as a bug. A verbal argument might be: "it's naughty to put untruthful depth claims in the TT; if caught, you will be punished for lying".
Just means you have a bug. You want to store exactly what will be useful the next time you reach this _same_ position. Which is with depth, not depth-R...
Mincho Georgiev
Posts: 454
Joined: Sat Apr 04, 2009 6:44 pm
Location: Bulgaria

Re: Null-move pruning and the hash table

Post by Mincho Georgiev »

Now, the most interesting part. After applying the null-move trick to the hash table and removing some now-useless restrictions from IID (again regarding null move):

Code: Select all

    Program                          Elo    +   -   Games   Score   Av.Op.  Draws

  1 P_019_nulltrick                : 2409   25  25   500    52.5 %   2391   32.2 %
  2 P_019_unchanged                : 2391   25  25   500    47.5 %   2409   32.2 %
So for me, that discussion was really helpful.
vladstamate
Posts: 161
Joined: Thu Jan 08, 2009 9:06 pm
Location: San Francisco, USA

Re: Null-move pruning and the hash table

Post by vladstamate »

I am actually getting the same behavior: storing depth instead of depth-R seems to slow down my engine. This is on limited test data; I am running a longer match now between the two versions.

I wonder what the bug can be...

Regards,
Vlad.
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Null-move pruning and the hash table

Post by hgm »

I am not sure what you all are doing. We are talking here about storing the node from which the null move was tried, in case the null move failed high?
Mincho Georgiev
Posts: 454
Joined: Sat Apr 04, 2009 6:44 pm
Location: Bulgaria

Re: Null-move pruning and the hash table

Post by Mincho Georgiev »

hgm wrote:I am not sure what you all are doing. We are talking here about storing the node from which the null move was tried, in case the null move failed high?
That also.
vladstamate
Posts: 161
Joined: Thu Jan 08, 2009 9:06 pm
Location: San Francisco, USA

Re: Null-move pruning and the hash table

Post by vladstamate »

Hi,

Yes, I am trying to store the hash entry of the position in which I have tried a null move and the result came back higher than the current beta. Here is what I do:

Code: Select all

	/* conduct a null-move search if it is legal and desired */
	if (bOkNullTT &&     // If the TT says there is a chance we might fail high
		!bPVNode &&
		(depth > R+1) && // if there is enough depth available
	    pStart->isNullMoveOk() && // if there is at least a piece
	    !pStart->isChess(pStart->m_iSideToMove) )
	{
		pStart->makeNullMove();
		/* null-move search with minimal window around beta */
		iScore = -search(pStart, -iBeta, -iBeta + 1, depth - R - 1, 0);
		pStart->unmakeNullMove();
		if (iScore >= iBeta && !m_bOutOfTime) /* cutoff in case of fail-high */
		{
			g_kSearchHash.store(pStart->m_kHashKey, 0, iScore, LOWER, depth-R-1, 0); 
			return iScore;
		}
	}
One difference from the above posts is that I store the score returned by the null search (not alpha), and the node type is LOWER (not UPPER).

Can you maybe see something I am doing wrong in the above code?

Thanks in advance,
Vlad.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Null-move pruning and the hash table

Post by bob »

vladstamate wrote:Hi,

Yes, I am trying to store the hash entry of the position in which I have tried a null move and the result came back higher than the current beta. Here is what I do:

Code: Select all

	/* conduct a null-move search if it is legal and desired */
	if (bOkNullTT &&     // If the TT says there is a chance we might fail high
		!bPVNode &&
		(depth > R+1) && // if there is enough depth available
	    pStart->isNullMoveOk() && // if there is at least a piece
	    !pStart->isChess(pStart->m_iSideToMove) )
	{
		pStart->makeNullMove();
		/* null-move search with minimal window around beta */
		iScore = -search(pStart, -iBeta, -iBeta + 1, depth - R - 1, 0);
		pStart->unmakeNullMove();
		if (iScore >= iBeta && !m_bOutOfTime) /* cutoff in case of fail-high */
		{
			g_kSearchHash.store(pStart->m_kHashKey, 0, iScore, LOWER, depth-R-1, 0); 
			return iScore;
		}
	}
One difference from the above posts is that I store the score returned by the null search (not alpha), and the node type is LOWER (not UPPER).

Can you maybe see something I am doing wrong in the above code?

Thanks in advance,
Vlad.
Here's what should happen.

You enter Search() with a remaining depth of "DEPTH". Whatever that might be is not important.

Your first task is to probe the hash table to see if you can find an entry that matches, with a draft (the remaining depth the entry was searched to) that is >= DEPTH. If not, the probe gives you nothing and you continue. If so, do whatever the entry suggests.

Assuming the entry did not allow you to terminate the search, your next step is to do a null-move search to DEPTH - R - 1 to see if that fails high. If so, you store a hash entry for this position with draft == DEPTH (not DEPTH - R), showing that the result is a "lower bound" (fail-high condition). You must use DEPTH because, if this null-move search fails high and you later reach this same position again, wouldn't you want to get a hash hit and fail high without going through the null-move search overhead again?

If the null-move search does not fail high, you do a normal search, and when it finishes (either via a fail high, a fail low, or an exact score) you store that result along with DEPTH, so that the next time you reach this position you won't have to repeat this search.

Since you initially probe with a depth of DEPTH, you _must_ store draft=DEPTH so that you avoid the work the next time around... Any other value of draft will hurt. Yes, storing DEPTH-R will make your program faster, because you will be trusting that result when you should not, since you will be probing with a value of DEPTH and that will most certainly be >= DEPTH-R more frequently than it will be >= DEPTH. But it would be wrong in most of those cases.
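Put into code, that recipe looks roughly like this (a sketch only; ttProbe/ttStore, LOWER, R and the Position methods are assumed names, and only the lower-bound case is shown):

Code: Select all

	int Search( Position &pos, int alpha, int beta, int depth )
	{
		HashEntry e;

		/* 1. probe: only an entry whose draft covers DEPTH is usable */
		if ( ttProbe( pos.key, e ) && e.draft >= depth )
		{
			if ( e.bound == LOWER && e.score >= beta )
				return e.score;
			/* ... other bound types handled analogously ... */
		}

		/* 2. reduced-depth null-move search */
		pos.makeNullMove();
		int score = -Search( pos, -beta, -beta + 1, depth - R - 1 );
		pos.unmakeNullMove();
		if ( score >= beta )
		{
			/* 3. store with draft == DEPTH, so that a revisit at the
			   same depth gets an immediate hash cutoff instead of
			   repeating the null-move search */
			ttStore( pos.key, score, LOWER, depth );
			return score;
		}

		/* 4. normal search follows; its result is likewise stored
		   with draft == DEPTH */
		/* ... */
	}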
Greg Strong
Posts: 388
Joined: Sun Dec 21, 2008 6:57 pm
Location: Washington, DC

Re: Null-move pruning and the hash table

Post by Greg Strong »

bob wrote:Since you initially probe with a depth of DEPTH, you _must_ store draft=DEPTH so that you avoid the work the next time around... Any other value of draft will hurt. Yes, storing DEPTH-R will make your program faster, because you will be trusting that result when you should not, since you will be probing with a value of DEPTH and that will most certainly be >= DEPTH-R more frequently than it will be >= DEPTH. But it would be wrong in most of those cases.
Unless I'm missing something (which I probably am), you've got it backwards. He's storing the hash table entry with a lower draft than he should, and thus he wouldn't be "trusting that result when you should not"; he would be distrusting it when he should.

If the depth is 9 and he stores the bound with depth - R (let's say 6), then when the position is encountered again at depth 7, the entry will be untrusted and the null move played again ...

I suppose there is a chance that this is actually good, because playing it again will update the hash table entries, which might help in the long run (although it seems unlikely).