Extended Null-Move Reductions

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Cardoso
Posts: 292
Joined: Thu Mar 16, 2006 6:39 pm

Extended Null-Move Reductions

Post by Cardoso » Fri Aug 20, 2010 2:27 pm

Hi,
has anyone tried this?

http://www.omiddavid.com/pubs/nmr.pdf

if so what were your results and conclusions?
best regards,
Alvaro

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob » Fri Aug 20, 2010 4:20 pm

Cardoso wrote:Hi,
has anyone tried this?

http://www.omiddavid.com/pubs/nmr.pdf

if so what were your results and conclusions?
best regards,
Alvaro
Based on the test results, it could be worthless. Once again someone uses test positions to decide whether an idea is good or bad, followed by self-play between two versions, one with the idea, one without. The right way to test is with a group of opponents that don't change. Given the test results, it is impossible to say whether this is good, bad or ugly.

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob » Fri Aug 20, 2010 4:53 pm

bob wrote:
Cardoso wrote:Hi,
has anyone tried this?

http://www.omiddavid.com/pubs/nmr.pdf

if so what were your results and conclusions?
best regards,
Alvaro
Based on the test results, it could be worthless. Once again someone uses test positions to decide whether an idea is good or bad, followed by self-play between two versions, one with the idea, one without. The right way to test is with a group of opponents that don't change. Given the test results, it is impossible to say whether this is good, bad or ugly.
I'm going to run a quick test on the idea since it is trivial to implement. I did choose to make one change. After the reduction, if depth <= 0, he just returns eval(). I chose to do a q-search since I add on a layer of checks to make this a bit safer. I am trying both ways, with 3 different values for the reduction R. Will report results when they are available...

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: Extended Null-Move Reductions

Post by Milos » Fri Aug 20, 2010 4:53 pm

Cardoso wrote:Hi,
has anyone tried this?

http://www.omiddavid.com/pubs/nmr.pdf

if so what were your results and conclusions?
It's already implemented in Rybka, Ippo and Stockfish, ergo it works. It's old news.

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob » Fri Aug 20, 2010 5:03 pm

Milos wrote:
Cardoso wrote:Hi,
has anyone tried this?

http://www.omiddavid.com/pubs/nmr.pdf

if so what were your results and conclusions?
It's already implemented in Rybka, Ippo and Stockfish, ergo it works. It's old news.
What, exactly, are you looking at? Stockfish 1.8 has a normal null-move search which searches to reduced depth, and returns if value >= beta (after doing the verification search which I do not use nor like). This algorithm does not do that. Instead, it falls into the normal search, but reduces the depth by 4 plies before doing so. Not anywhere near the same idea... this just reduces the depth if the null-move search fails high, but the normal search is then used, which is immune to zugzwang issues. But reducing every move by 4 plies is significant, and might be excessive. Testing is suggesting that.
Last edited by bob on Fri Aug 20, 2010 5:19 pm, edited 1 time in total.

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob » Fri Aug 20, 2010 5:04 pm

For R=3 (first run) this appears to drop 24 Elo after 3,000 games. Will let it run to completion for accuracy. Also have R=4 and R=5 queued up since the paper uses R=4...

Code: Select all

 Name                  Elo    +    - games score  oppo draws
 Crafty-23.4          2681    4    4 30000   66%  2557   22% 
 Crafty-23.3          2673    4    4 30000   65%  2557   22% 
 Crafty-23.4R01       2657   11   11  2730   64%  2551   23% 

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob » Fri Aug 20, 2010 5:26 pm

bob wrote:For R=3 (first run) this appears to drop 24 Elo after 3,000 games. Will let it run to completion for accuracy. Also have R=4 and R=5 queued up since the paper uses R=4...

Code: Select all

 Name                  Elo    +    - games score  oppo draws
 Crafty-23.4          2681    4    4 30000   66%  2557   22% 
 Crafty-23.3          2673    4    4 30000   65%  2557   22% 
 Crafty-23.4R01       2657   11   11  2730   64%  2551   23% 
My original thought is being borne out by testing. R=4 is just a little worse than the R=3 test. I don't believe the idea, as written up in the paper, works for Crafty. With no knowledge of what his program does internally, there is no way to determine whether some of my pruning/reduction/etc ideas already provide the benefit he claims.

Others might try the idea, as it takes about 30 seconds to implement. With my cluster it takes less than 30 minutes to reject it as well. :)

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: Extended Null-Move Reductions

Post by Milos » Fri Aug 20, 2010 5:31 pm

bob wrote:What, exactly, are you looking at? Stockfish 1.8 has a normal null-move search which searches to reduced depth, and returns if value >= beta (after doing the verification search which I do not use nor like). This algorithm does not do that. Instead, it falls into the normal search, but reduces the depth by 4 plies before doing so. Not anywhere near the same idea... this just reduces the depth if the null-move search fails high, but the normal search is then used, which is immune to zugzwang issues. But reducing every move by 4 plies is significant, and might be excessive. Testing is suggesting that.
I'm talking about an adaptive null-move reduction of 3 to 4 instead of Heinz's prehistoric 2 to 3 that some people are so stubbornly using.
I'm certainly not talking about the idea of not returning on a fail-high.
That idea simply cannot work.
First, null-move verification doesn't help Elo-wise; second, null-move verification uses a larger depth reduction than the 4 applied in the fail-high case here.
Conclusion: you will search a larger tree for nothing and just lose Elo. You don't need a cluster to realize that.

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob » Fri Aug 20, 2010 5:36 pm

Milos wrote:
bob wrote:What, exactly, are you looking at? Stockfish 1.8 has a normal null-move search which searches to reduced depth, and returns if value >= beta (after doing the verification search which I do not use nor like). This algorithm does not do that. Instead, it falls into the normal search, but reduces the depth by 4 plies before doing so. Not anywhere near the same idea... this just reduces the depth if the null-move search fails high, but the normal search is then used, which is immune to zugzwang issues. But reducing every move by 4 plies is significant, and might be excessive. Testing is suggesting that.
I'm talking about an adaptive null-move reduction of 3 to 4 instead of Heinz's prehistoric 2 to 3 that some people are so stubbornly using.
I'm certainly not talking about the idea of not returning on a fail-high.
That idea simply cannot work.
First, null-move verification doesn't help Elo-wise; second, null-move verification uses a larger depth reduction than the 4 applied in the fail-high case here.
Conclusion: you will search a larger tree for nothing and just lose Elo. You don't need a cluster to realize that.
I stopped with the 2~3 a couple of years ago. But the basic idea of the paper is that rather than failing high if the null-move search returns a value >= beta, you just reduce the depth and do the normal search (reduction = 4 in this paper). That's a significant departure from null-move search and is somewhat akin to the verification-search idea that I also consider to be bad... But when you said "Stockfish, et al. use this" that implies they use the basic idea presented in the paper, which nobody does that I am aware of (unless you count the somewhat similar verification search idea).

BTW if you look carefully, Stockfish uses depth-5 for verification. This paper uses depth - R - 1 where R=4, so effectively a reduction of 5 as well. They should be fairly similar overall...

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: Extended Null-Move Reductions

Post by Milos » Fri Aug 20, 2010 5:45 pm

bob wrote:I stopped with the 2~3 a couple of years ago. But the basic idea of the paper is that rather than failing high if the null-move search returns a value >= beta, you just reduce the depth and do the normal search (reduction = 4 in this paper). That's a significant departure from null-move search and is somewhat akin to the verification-search idea that I also consider to be bad... But when you said "Stockfish, et al. use this" that implies they use the basic idea presented in the paper, which nobody does that I am aware of (unless you count the somewhat similar verification search idea).
OK, my bad phrasing. I meant the idea of a more aggressive null-move reduction, not reducing depth and continuing the search on a fail-high, which might be the main idea of the paper but is certainly not the idea that brought them Elo.

The thing with all null-move verification schemes is that zugzwangs are simply too rare in Elo terms, and a good balance between verification search depth and reliable zugzwang detection simply doesn't exist.
If you increase the tree just a little (too high a verification-search depth reduction) you won't catch any zugzwangs, and will just do additional searching for nothing. If you don't reduce the verification-search depth much, you will detect most zugzwangs but pay a much higher Elo price for searching a bigger tree. An optimum doesn't exist. Or, more precisely, the optimum exists and is achieved with no verification search at all :).
