Null-move pruning and the hash table
Moderators: hgm, Rebel, chrisw
-
- Posts: 27795
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Null-move pruning and the hash table
Mincho Georgiev wrote: I've only tried the verified null-move pruning so far, where after a cut-off the search continues with reduced depth.

That is the same thing. The position after two null moves in a row is identical to the original position, with the same side to move, and its depth has been reduced by the two null moves. So you end up searching the original position at reduced depth, which is exactly the verification search. When it fails low, the second null move returns a fail high and the first a fail low, and you then search the same position again, but now at full depth, which is what you would normally do when the verification search fails low.
-
- Posts: 454
- Joined: Sat Apr 04, 2009 6:44 pm
- Location: Bulgaria
Re: Null-move pruning and the hash table
Hm, I never thought about it; I should consider that more deeply. Thanks for pointing this out.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Null-move pruning and the hash table
Mincho Georgiev wrote: Recursive null-move pruning is a widespread scheme. It means that more than one null move is allowed in one branch (line) of the search. However, this is absolutely different from making 2-3 null moves in a row. The latter brings a huge risk of tactical blunders: it can cause the search to assume a position is good and cut off when it is actually very bad.

hgm wrote: Why is that? I don't pay any attention to the number of null moves in a row in micro-Max, and have never found any adverse effect from it. I would say that at the very worst it could be a (small) waste of time. Two null moves in a row cannot both fail high. For the first to fail high, the second must fail low, and then it is as if you had not searched it at all, since a low-failing null move does not affect the search or score. You only prune when it fails high. And if the second null move fails high, making the first fail low where it otherwise would have failed high, it merely means that you are in zugzwang, and that the search after the two null moves acted as a verification search that detected that (despite its high reduction), so you don't trust the null move and do a normal search instead.

I did some testing and got mixed results, so I am making a cluster run that will have the best 23.2 version so far, then that version without the code that disallows two nulls in a row, and then, using the same modified code, allowing nulls anywhere, including pawn-only endings...

More when that finishes.
-
- Posts: 454
- Joined: Sat Apr 04, 2009 6:44 pm
- Location: Bulgaria
Re: Null-move pruning and the hash table
Nice, I've just started something similar (rows vs recursive_only) 2h ago, and I'm very interested to see the results of yours.
-
- Posts: 155
- Joined: Mon Feb 15, 2010 9:33 am
- Location: New Zealand
Re: Null-move pruning and the hash table
micron wrote: I just implemented null-move pruning in Spandrel.

michiguel wrote: Spandrel is a gould name!
Miguel
PS: architect or evolutionary biologist?

Nice pun :-)
Bridge design enthusiast. The wikipedia article http://en.wikipedia.org/wiki/Spandrel_(biology) describes Gould & Lewontin's borrowing of 'spandrel' for biology.
Robert P.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Null-move pruning and the hash table
Mincho Georgiev wrote: Nice, I've just started something similar (rows vs recursive_only) 2h ago, and I'm very interested to see the results of yours.

Going way slower than I thought. A student hung up all but one node on the cluster; his distributed program apparently deadlocked, so the job will never finish. Trying to get it cleared out right now, in fact.
-
- Posts: 454
- Joined: Sat Apr 04, 2009 6:44 pm
- Location: Bulgaria
Re: Null-move pruning and the hash table
bob wrote: Going way slower than I thought. A student hung up all but one node on the cluster; his distributed program apparently deadlocked, so the job will never finish. Trying to get it cleared out right now, in fact.

Oh, I hope you clear it. I'll post mine after 500-1000 games.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Null-move pruning and the hash table
Mincho Georgiev wrote: Oh, I hope you clear it. I'll post mine after 500-1000 games.

Making progress now. Early results:
Code: Select all
Crafty-23.2R01-1 2658 3 3 30000 61% 2577 23%
Crafty-23.2R07 2655 4 4 14843 60% 2577 23%
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Null-move pruning and the hash table
Teemu Pudas wrote:
[1] if (tt_depth >= depth - R - 1 && tt_score < beta && tt_type & UPPER) skip_null(); // If a plain reduced search without nullmove won't produce a cutoff...
[2] You have a bug.

micron wrote:
[1] Ah, thank you, that seems to predict reliably that null-move won't cut off and therefore should not be attempted. In my program, though, it turns out to save a negligible number of attempts.
Without skip_null: 45366 null moves, 36240 cutoffs (79%)
With skip_null: 45304 null moves, 36240 cutoffs (79%)
[2] Good, if I can fix the bug in the TT save, null-move pruning should produce even better results...
Robert P.

Search deeper. Those counts should be in the millions, not tens of thousands. The deeper you go, the more hash hits you will see (particularly hits where draft < depth).
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Null-move pruning and the hash table
OK, the jury is back, and the verdict is "don't do this." Not doing back-to-back nulls is only worth about 2-3 Elo, and the reason seems to be efficiency. After the cluster test showed the idea to be slightly worse, I ran some test positions with additional statistics gathered. A null right after a null does not fail high very often, and when it doesn't, it is a total waste of time. If you disallow a null right after a null, you gain a bit of zugzwang protection, but you lose a little more in efficiency from the extra null-move searches that fail low and don't refute the previous null move.
I'm sticking with the current approach. I will test both of these versions by adding verification, for completeness, but it seems like a very slightly worse way of doing null-move searches, although it is not very significant overall. But as I have often said, give away a couple of Elo here, a couple there, and soon you are working on a patzer.
R08 is the version with double-nulls allowed, and nulls can be done even in pawn-only endings. It always looks a bit better until the 15-20,000 game point where it has dropped back to about the same.
Code: Select all
Crafty-23.2R08 2653 6 6 6974 61% 2569 23%
Crafty-23.2R01-1 2651 3 3 30000 61% 2570 23%
Crafty-23.2R07-1 2648 3 3 30000 60% 2570 23%
Note that R01 does win one more game out of every 100 than R07 (the double-null version).. I'll post the complete results in a couple of hours when I get out of class... Test should have finished by then.