Extended Null-Move Reductions

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Extended Null-Move Reductions

Post by michiguel »

Ralph Stoesser wrote:
Gerd Isenberg wrote: But isn't NM fails high but verification fails low the definition of zugzwang?
Yes, these are zugzwang positions by definition, within the (reduced) search horizon.
Maybe a few position examples from Marco could clear up the mystery..?
Not necessarily. Null-move search, particularly if it is recursive, will delay a very deep threat. Depending on the implementation, if the verification search is not recursive, or alters the order of null moves allowed (for instance, since the first move is not a null move, now the second could be, etc.), it is possible that a shorter verification search may not be able to push the threat beyond the shorter horizon.

Miguel
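Miguel's point can be illustrated with a toy accounting model. This is not engine code: with the usual "no two consecutive null moves" rule, which plies may host a null depends on whether the first move of the line may itself be a null, and each null burns R extra plies of depth, so the main search and a verification search (whose first move must be a real one) can delay the same deep threat by different amounts. Names and the model itself are illustrative assumptions.

```c
/* Toy model: upper bound on nulls along a path of path_len plies when
   nulls and real moves must alternate. If the first move may be a null,
   nulls can sit at plies 0, 2, 4, ...; otherwise only at 1, 3, 5, ... */
int max_nulls(int path_len, int first_may_be_null)
{
    int usable = first_may_be_null ? path_len : path_len - 1;
    return usable <= 0 ? 0 : (usable + 1) / 2;
}

/* Each null costs R extra plies of depth, so this is how much "horizon"
   the nulls along one path can eat up. */
int depth_spent_on_nulls(int path_len, int first_may_be_null, int R)
{
    return max_nulls(path_len, first_may_be_null) * R;
}
```

With R=3 and a 7-ply line, a search whose first move may be a null can spend 12 plies of depth on nulls, while one whose first move must be real can spend only 9, so the two searches push the same threat different distances past the horizon.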
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Many thanks for the testing (NT) another result.

Post by bob »

jwes wrote:
bob wrote:
jwes wrote:
bob wrote:
bob wrote:one more bit of info. Using this approach, and allowing null-move search in pawn-only endings, is worth about -100 Elo, so this test is not exactly good news for verification.
Just realized I reported that wrong. Using R=4 and doing null-move in KP endings is a -11 Elo deal in Crafty. Going to R=5 drops it to -100, again doing this in KP endings (my normal null-move only triggers if the side on move has a single piece or more...)
An idea I had was to allow null move when the side on move has no pieces but is significantly down in material (I would guess by more than a piece). The idea is that if you're lost anyway, it probably doesn't take zugzwang to beat you.
Once the current batch of tests are done, I'll give that a check. In pawn-only endgames or not???
Since you already do null moves when the side on move has a piece, try also doing null moves when the side on move has no pieces and is significantly down in material.
OK, two more questions:

(1) do original null-move or this new idea?

(2) what is "significantly down in material"? piece? rook? queen?

Other tests have concluded. The best result I produced was still a 10 Elo loss relative to the standard null-move search. I conclude that this simply does not work for Crafty. Not convinced it will work for anyone, but I am certain it doesn't work for me, period...
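For reference, the guard Bob describes ("null-move only triggers if the side on move has a single piece or more") can be sketched as below. This is a hypothetical sketch, not Crafty's actual code or data layout; "pieces" counts non-pawn, non-king material for the side to move.

```c
#include <stdbool.h>

/* Standard guard sketch: no null moves in pawn-only endings,
   where zugzwang is common. */
bool standard_null_ok(int pieces_stm)
{
    return pieces_stm >= 1;   /* at least one piece: null move allowed */
}
```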
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob »

Pablo Vazquez wrote:
bob wrote:For R=3 (first run) this appears to drop 24 Elo after 3,000 games. Will let it run to completion for accuracy. Also have R=4 and R=5 queued up since the paper uses R=4...

Code: Select all

 Crafty-23.4          2681    4    4 30000   66%  2557   22% 
 Crafty-23.3          2673    4    4 30000   65%  2557   22% 
 Crafty-23.4R01       2657   11   11  2730   64%  2551   23% 
One question about your implementation. When you finish the search and store in the transposition table, do you use the original depth or the reduced one?
Reduced one, if you mean the reduced-depth search that follows the null-move fail high. Don't see how you could store anything else and not get gross search inconsistencies...

All I did was reduce the actual "depth" variable at the null-move test code in search.c... reduced-depth is used below that point for everything.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob »

Ralph Stoesser wrote:
Gerd Isenberg wrote: But isn't NM fails high but verification fails low the definition of zugzwang?
Yes, these are zugzwang positions by definition, within the (reduced) search horizon.
Maybe a few position examples from Marco could clear up the mystery..?
There is an alternative explanation. Null move also reduces the _depth_, which can hide various sorts of deep tactics or deep positional issues. Every time you reduce the depth, you take yourself closer to the horizon without actually making the correct number of moves and counter-moves.

Once you think about it, there are true zugzwang positions where if you move, the roof falls in, but if you could stand pat and let your opponent move, it doesn't. The classic case is a white pawn at e4, white king at e5, black king at e7: black to move loses, white to move draws. The other type of position is simply a tactic that requires N plies to resolve, but below a null move (or even in the verification search) the reduced depth is not enough to see the problem, while the original depth was. The classic case of the latter used to be a position where (say) black has played g6, white has a pawn on f6 and a queen on h6, with the black king at g8: mate in 1 with Qg7#. But the null move could, instead, cause white to drop right into the q-search, where it would not see Qg7 if that search only does captures.
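The verified null-move flow being debated in this thread can be sketched as follows. This is a hedged sketch, not Crafty's actual code: a null-move fail-high is re-checked with a reduced-depth real search in which nulls are disabled; if that verification also fails high, the cutoff stands, and if it fails low the position is treated as a possible zugzwang and the caller falls back to a normal search. The engine's search is abstracted behind a callback so the control flow can be exercised; all names and values are illustrative.

```c
#include <limits.h>

typedef int (*SearchFn)(int depth, int alpha, int beta, int allow_null);

#define NO_CUTOFF INT_MIN   /* sentinel: caller must search normally */

int null_move_step(SearchFn search, int depth, int beta, int R)
{
    /* null-move search: reduced depth, zero window, opponent to move,
       so the returned score is negated; nulls disabled below */
    int score = -search(depth - R - 1, -beta, -beta + 1, 0);
    if (score < beta)
        return NO_CUTOFF;             /* null failed low: search normally */

    /* verification: a real (non-null) search at the same reduced depth */
    int v = search(depth - R - 1, beta - 1, beta, 0);
    if (v >= beta)
        return v;                     /* verified cutoff */

    return NO_CUTOFF;                 /* verification failed low: probable
                                         zugzwang, continue at full depth */
}

/* Two toy "searches" to exercise the branches; the stub tells the null
   call from the verification call by its alpha bound. */
static int stub_verified(int d, int a, int b, int n)
{
    (void)d; (void)b; (void)n;
    return a < 0 ? -200 : 150;   /* null sees +200, verification +150 */
}

static int stub_zugzwang(int d, int a, int b, int n)
{
    (void)d; (void)b; (void)n;
    return a < 0 ? -200 : 50;    /* null fails high, verification fails low */
}
```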
Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Extended Null-Move Reductions

Post by Uri Blass »

Ralph Stoesser wrote:
Gerd Isenberg wrote: But isn't NM fails high but verification fails low the definition of zugzwang?
Yes, these are zugzwang positions by definition, within the (reduced) search horizon.
Maybe a few position examples from Marco could clear up the mystery..?
As a chess player, my definition of zugzwang is different, and it does not depend on what a chess program's search returns.

A zugzwang is a position in which not moving is better than moving.

The result of a chess program's search is clearly irrelevant to the definition.

The program may score not moving as better at reduced depth because it does not search deep enough, but that does not mean the position is a zugzwang.

Even without that problem, the program may score not moving as better than moving simply because it does not see deep enough; it is possible that in some position every move reduces the positional score at depth 1, and yet the position is not a zugzwang.

I think the right solution is to flag positions as "probably zugzwang" only when the program sees the side to move losing at least a full pawn (or maybe at least half a pawn, but something significant) by moving relative to not moving, with both searched to the same reduced depth. Hopefully, zugzwang detection would then find real zugzwangs in most cases of positive detection.

Hopefully, if the extra search is cheap enough (so that correct zugzwang detection makes Crafty no more than 1% slower in nodes per second), Bob will find a small improvement in playing strength of less than 5 Elo from correct zugzwang detection.

Uri
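Uri's margin test can be sketched in one line. This is a hedged sketch of the suggestion, not anyone's implementation: flag "probably zugzwang" only when, at the same reduced depth, the best real move scores at least a margin worse than standing pat. The centipawn units and the margin value are illustrative.

```c
/* Flag a probable zugzwang: moving loses at least margin_cp centipawns
   relative to the null (stand-pat) score at the same reduced depth. */
int probably_zugzwang(int null_score, int best_move_score, int margin_cp)
{
    return best_move_score <= null_score - margin_cp;
}
```

With a 100 cp (full pawn) margin, a position where standing pat scores +0.50 and the best real move scores -0.60 would be flagged, while one where the best move scores +0.40 would not.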
Pablo Vazquez
Posts: 154
Joined: Thu May 31, 2007 9:05 pm
Location: Madrid, Spain

Re: Extended Null-Move Reductions

Post by Pablo Vazquez »

bob wrote:
Pablo Vazquez wrote:
bob wrote:For R=3 (first run) this appears to drop 24 Elo after 3,000 games. Will let it run to completion for accuracy. Also have R=4 and R=5 queued up since the paper uses R=4...

Code: Select all

 Crafty-23.4          2681    4    4 30000   66%  2557   22% 
 Crafty-23.3          2673    4    4 30000   65%  2557   22% 
 Crafty-23.4R01       2657   11   11  2730   64%  2551   23% 
One question about your implementation. When you finish the search and store in the transposition table, do you use the original depth or the reduced one?
Reduced one, if you mean the reduced-depth search that follows the null-move fail high. Don't see how you could store anything else and not get gross search inconsistencies...

All I did was reduce the actual "depth" variable at the null-move test code in search.c... reduced-depth is used below that point for everything.
The reason for using the original depth would be the same as with normal null move: to be able to use the result if you visit the node again with the same depth. Anyway, I'm not sure the idea is sound. After the null move has failed high, the reduced search acts like a verification search. If it fails high, you return as in verified null move, but if it fails low (a zugzwang position), it leaves you with a reduced-depth search, whereas in verified null move you would continue searching at normal depth.
Ralph Stoesser
Posts: 408
Joined: Sat Mar 06, 2010 9:28 am

Re: Extended Null-Move Reductions

Post by Ralph Stoesser »

Uri Blass wrote:
Ralph Stoesser wrote:
Gerd Isenberg wrote: But isn't NM fails high but verification fails low the definition of zugzwang?
Yes, these are zugzwang positions by definition, within the (reduced) search horizon.
Maybe a few position examples from Marco could clear up the mystery..?
As a chess player, my definition of zugzwang is different, and it does not depend on what a chess program's search returns.

A zugzwang is a position in which not moving is better than moving.

The result of a chess program's search is clearly irrelevant to the definition.

[..]
There are special cases where humans detect zugzwang by chess knowledge, but apart from those known patterns humans also have to search to be able to conclude zugzwang. Since computers are in general much better at finding the best moves than any human player, they should also be better at detecting the null move as the best move, even at reduced search depth.

Your zugzwang definition says nothing about how much better the null move should be. Even if it's only 1/100 of a pawn better not to move at all, it's still zugzwang by definition. Note that the zugzwang definition also says nothing about who is about to win the game. Being in zugzwang does not necessarily mean being in serious trouble.
Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Extended Null-Move Reductions

Post by Uri Blass »

Ralph Stoesser wrote:
Uri Blass wrote:
Ralph Stoesser wrote:
Gerd Isenberg wrote: But isn't NM fails high but verification fails low the definition of zugzwang?
Yes, these are zugzwang positions by definition, within the (reduced) search horizon.
Maybe a few position examples from Marco could clear up the mystery..?
As a chess player, my definition of zugzwang is different, and it does not depend on what a chess program's search returns.

A zugzwang is a position in which not moving is better than moving.

The result of a chess program's search is clearly irrelevant to the definition.

[..]
There are special cases where humans detect zugzwang by chess knowledge, but apart from those known patterns humans also have to search to be able to conclude zugzwang. Since computers are in general much better at finding the best moves than any human player, they should also be better at detecting the null move as the best move, even at reduced search depth.

Your zugzwang definition says nothing about how much better the null move should be. Even if it's only 1/100 of a pawn better not to move at all, it's still zugzwang by definition. Note that the zugzwang definition also says nothing about who is about to win the game. Being in zugzwang does not necessarily mean being in serious trouble.
The problem is that you cannot trust a 1/100-pawn evaluation difference. If the program's evaluation is 0.01 better for not moving, the evaluation may simply be wrong, and you may see something different when you search deeper.

I think it is not important to detect zugzwangs of 0.01 pawns, and in practice programs that only detect big zugzwangs can play perfectly provided the search is deep enough, because a mistake that loses due to zugzwang will show up as a big zugzwang after a deep search.
jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: Many thanks for the testing (NT) another result.

Post by jwes »

bob wrote:
jwes wrote:
bob wrote:
jwes wrote:
bob wrote:
bob wrote:one more bit of info. Using this approach, and allowing null-move search in pawn-only endings, is worth about -100 Elo, so this test is not exactly good news for verification.
Just realized I reported that wrong. Using R=4 and doing null-move in KP endings is a -11 Elo deal in Crafty. Going to R=5 drops it to -100, again doing this in KP endings (my normal null-move only triggers if the side on move has a single piece or more...)
An idea I had was to allow null move when the side on move has no pieces but is significantly down in material (I would guess by more than a piece). The idea is that if you're lost anyway, it probably doesn't take zugzwang to beat you.
Once the current batch of tests are done, I'll give that a check. In pawn-only endgames or not???
Since you already do null moves when the side on move has a piece, try also doing null moves when the side on move has no pieces and is significantly down in material.
OK, two more questions:

(1) do original null-move or this new idea?

(2) what is "significantly down in material"? piece? rook? queen?
1. Do your standard null move, and in addition do a standard null move when the side to move has no pieces and is down significantly in material.
2. I am guessing "significantly" means more than a piece, but you could first try a larger number to see if it works for you at all, and tune it later.
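The combined rule jwes describes can be sketched as below. This is a hedged guess at the proposal, not tested engine code: keep the standard guard (at least one piece for the side to move), and additionally permit a null in a pawn-only ending when that side is already far behind, on the theory that a lost position does not need zugzwang to be beaten. The threshold and centipawn units are illustrative assumptions.

```c
#include <stdbool.h>

/* Extended null-move guard: standard rule, plus nulls in pawn-only
   endings when the side to move is down by more than a minor piece. */
bool null_ok_extended(int pieces_stm, int material_balance_cp)
{
    const int PIECE = 300;                 /* hypothetical minor-piece value */
    if (pieces_stm >= 1)
        return true;                       /* standard rule */
    return material_balance_cp < -PIECE;   /* pawn ending, down > a piece */
}
```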
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Extended Null-Move Reductions

Post by bob »

Pablo Vazquez wrote:
bob wrote:
Pablo Vazquez wrote:
bob wrote:For R=3 (first run) this appears to drop 24 Elo after 3,000 games. Will let it run to completion for accuracy. Also have R=4 and R=5 queued up since the paper uses R=4...

Code: Select all

 Crafty-23.4          2681    4    4 30000   66%  2557   22% 
 Crafty-23.3          2673    4    4 30000   65%  2557   22% 
 Crafty-23.4R01       2657   11   11  2730   64%  2551   23% 
One question about your implementation. When you finish the search and store in the transposition table, do you use the original depth or the reduced one?
Reduced one, if you mean the reduced-depth search that follows the null-move fail high. Don't see how you could store anything else and not get gross search inconsistencies...

All I did was reduce the actual "depth" variable at the null-move test code in search.c... reduced-depth is used below that point for everything.
The reason for using the original depth would be the same as with normal null move: to be able to use the result if you visit the node again with the same depth. Anyway, I'm not sure the idea is sound. After the null move has failed high, the reduced search acts like a verification search. If it fails high, you return as in verified null move, but if it fails low (a zugzwang position), it leaves you with a reduced-depth search, whereas in verified null move you would continue searching at normal depth.
The problem is the "depth" value which gets passed through the recursive search calls. It would be incredibly bad to store using "depth" but pass depth-4 to the next ply. The result you are storing is not a "depth" search, but a depth-4 search. If you store depth-4, you will get a hit the next time around at the same depth...
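Bob's point about the transposition table can be sketched as below. This is a hedged illustration, not Crafty's actual hashing code: store the depth you actually searched (the reduced one), so that a later probe wanting a deeper search will not accept this shallower result. All names are illustrative.

```c
/* Minimal transposition-table sketch: the stored draft must match the
   depth that was actually searched. */
typedef struct {
    int depth;    /* draft the stored score was searched to */
    int score;
} TTEntry;

void tt_store(TTEntry *e, int depth_searched, int score)
{
    e->depth = depth_searched;    /* e.g. depth - 4 after the reduction */
    e->score = score;
}

int tt_usable(const TTEntry *e, int depth_needed)
{
    return e->depth >= depth_needed;   /* only trust deep-enough entries */
}

/* A node searched at depth 8-4 must not satisfy a later depth-8 probe,
   otherwise a full-depth search would silently accept a shallow result. */
int tt_demo(void)
{
    TTEntry e;
    tt_store(&e, 8 - 4, 30);
    return tt_usable(&e, 8);
}
```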