Calculating R value for Null Move

AndrewGrant
Posts: 472
Joined: Tue Apr 19, 2016 4:08 am
Location: U.S.A
Full name: Andrew Grant

Calculating R value for Null Move

Post by AndrewGrant » Thu Jun 23, 2016 6:16 am

I've recently gained access to some additional computing power, so I have been tinkering with changes that need reasonable time controls to give good results. I've been working on my null-move value. Up until now, my formula has always been very simple: search to depth - R - 1, with a constant R = 3.

It seems reasonable to factor in the current depth and how large the gap between beta and evalBoard() is.

I took a look at Crafty, Fruit, and Stockfish.

Crafty keeps it simple with R = (3 + depth / someInteger)

Fruit is even simpler, with a constant R = 3.

As expected, Stockfish has strange numbers which managed to do best in their testing, factoring in both the depth and the eval-beta difference.
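For concreteness, the three schemes above can be sketched roughly like this. All of the constants (the depth divisor 6, the eval divisor 200, the bonus cap of 3) are made-up placeholders for illustration; none of them are the actual values used by Fruit, Crafty, or Stockfish:

```c
/* Plain fixed reduction, as in Fruit: R is a constant. */
static int fixed_reduction(void) {
    return 3;
}

/* Depth-scaled reduction, in the spirit of Crafty's R = 3 + depth/someInteger.
 * The divisor 6 is an illustrative placeholder, not Crafty's actual value. */
static int depth_scaled_reduction(int depth) {
    return 3 + depth / 6;
}

/* Depth- and eval-scaled reduction, in the spirit of Stockfish: reduce more
 * when the static eval exceeds beta by a wide margin. The divisor 200 and
 * the cap of 3 are illustrative placeholders. */
static int eval_scaled_reduction(int depth, int eval, int beta) {
    int r = 3 + depth / 6;
    if (eval > beta) {
        int bonus = (eval - beta) / 200;
        if (bonus > 3)
            bonus = 3;
        r += bonus;
    }
    return r;
}
```

In each case the null-move search then recurses to depth - R - 1 with the side to move passed.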


What have you all seen within your own engines? Are there factors worth including other than the depth and the beta - evalBoard() margin?

I'm sure Robert Hyatt could write a book on the various methods he has tried over the years.

Ferdy
Posts: 4079
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: Calculating R value for Null Move

Post by Ferdy » Thu Jun 23, 2016 6:52 am

Other factors can be the mobility of the side to move, the number of pieces the side to move has, whether the opponent's last move was a capture or a promotion, the iteration depth, and the king-safety score of the side to move.
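Some of those factors commonly appear not in the R formula itself but as preconditions for trying the null move at all. A typical guard (a sketch, using a hypothetical minimal position struct) skips the null move when in check or when the side to move has no pieces besides pawns, to reduce the risk of zugzwang:

```c
#include <stdbool.h>

/* Hypothetical minimal info about the side to move (not from any real engine). */
struct SideInfo {
    int knights, bishops, rooks, queens;
    bool in_check;
};

/* Common precondition sketch: don't try a null move when in check, or when
 * the side to move has only king and pawns (zugzwang danger). */
static bool null_move_allowed(const struct SideInfo *s) {
    if (s->in_check)
        return false;
    int pieces = s->knights + s->bishops + s->rooks + s->queens;
    return pieces > 0;
}
```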

Henk
Posts: 5799
Joined: Mon May 27, 2013 8:31 am

Re: Calculating R value for Null Move

Post by Henk » Thu Jun 23, 2016 10:51 am

Null-move searches also fill the transposition table.

This question is difficult to answer. Maybe start with a plain minimax algorithm without any tables, using only the null-move reduction.

If the best method found to compute R there is understandable, then move on to a slightly more complex algorithm, and so on.

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Calculating R value for Null Move

Post by bob » Thu Jun 23, 2016 4:54 pm

AndrewGrant wrote:I've recently gained access to some additional computing power, so I have been tinkering with changes which need reasonable time controls to get good results from. I've been working with my Null Move value. Up until now, my formula has always been very simple, current depth-R-1 (R=3).

It seems reasonable to factor in the current depth and how great the gap between beta and evalBoard() is.

I took a look at Crafty, Fruit, and Stockfish.

Crafty keeps it simple with R = (3 + depth / someInteger)

Fruit even more simple, with constant R=3

As expected, Stockfish has strange numbers which managed to do best in their testing, factoring in depth and eval diff.


What have you all seen within your own engines? Are there factors to include other than depth and beta-evalBoard() value?

I'm sure Robert Hyatt could write a book on various methods he has tried over the years
My current "formula" was derived from millions of test games on a cluster. But this, like most everything else in computer chess, is really sort of coupled to the rest of the engine. I.e., change LMR and the null-move formula would likely change (to keep it optimal). Change the search extensions, same thing.

I think the idea of scaling it according to remaining depth makes a lot of sense, since "the null-move observation" is pretty straightforward. Some have even suggested that rather than using a formula, the null-move search should simply go to a fixed depth, period: the idea is that if my position is so good that not playing a move at all still fails high, that is pretty easy to refute. Most null-move searches simply recognize that one side is so far ahead in material that the other side can't do anything about it, period. A shallow search will show that just about as well as a deep search...

I have this on my list to test at some point, now that we see such extreme depths in the typical search today.

hgm
Posts: 23619
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Calculating R value for Null Move

Post by hgm » Thu Jun 23, 2016 7:11 pm

Fixed-depth null-move search would mean there are some threats you would never see, no matter how deep you made the engine search.

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Calculating R value for Null Move

Post by bob » Thu Jun 23, 2016 9:03 pm

hgm wrote:Fixed-depth null-move search would mean there are some threats you would never see, no matter how deep you made the engine search.
How so? If you set depth=8 (or whatever), each successive iteration would see one ply deeper at that same position?

konsolas
Posts: 182
Joined: Sun Jun 12, 2016 3:44 pm
Location: London
Full name: Vincent
Contact:

Re: Calculating R value for Null Move

Post by konsolas » Thu Jun 23, 2016 9:17 pm

bob wrote:
hgm wrote:Fixed-depth null-move search would mean there are some threats you would never see, no matter how deep you made the engine search.
How so. If you set depth=8 (or whatever) each successive iteration would see one ply deeper at that same position?
There is a difference between fixed depth and fixed reduction. Fixed reduction is fine, because as iterative deepening ... gets deeper, the same null move cutoff would be searched to a greater depth. With fixed depth, the same cutoff would always be searched to the same depth, and the engine would never see a potential threat.
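The distinction can be made concrete. With a fixed reduction, the depth of the null-move search grows along with the parent depth; with a fixed null-search depth it never does, so a threat longer than that fixed horizon is invisible from that node at any iteration. A minimal sketch (the constant 8 is just an example value):

```c
/* Fixed reduction: the null-move search deepens as the main search deepens. */
static int null_depth_fixed_reduction(int depth, int R) {
    return depth - R - 1;
}

/* Fixed depth: the null-move search is always the same depth, so a threat
 * deeper than NULL_SEARCH_DEPTH plies is never seen from this node,
 * regardless of how deep the main search goes. */
#define NULL_SEARCH_DEPTH 8
static int null_depth_fixed_depth(int depth) {
    (void)depth; /* deliberately ignored */
    return NULL_SEARCH_DEPTH;
}
```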

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Calculating R value for Null Move

Post by bob » Thu Jun 23, 2016 10:08 pm

konsolas wrote:
bob wrote:
hgm wrote:Fixed-depth null-move search would mean there are some threats you would never see, no matter how deep you made the engine search.
How so. If you set depth=8 (or whatever) each successive iteration would see one ply deeper at that same position?
There is a difference between fixed depth and fixed reduction. Fixed reduction is fine, because as iterative deepening ... gets deeper, the same null move cutoff would be searched to a greater depth. With fixed depth, the same cutoff would always be searched to the same depth, and the engine would never see a potential threat.
I see what he was talking about now... Whether that is bad with 30+ ply searches is another issue, however.
