thoughts on null-move reduction

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

ymatioun
Posts: 64
Joined: Fri Oct 18, 2013 11:40 pm
Location: New York

thoughts on null-move reduction

Post by ymatioun »

I am trying to understand why null-move reduction works so well, and possibly find a way to make it even better. Here are some thoughts and questions I have on this topic.

Null-move reduction works by searching a node with reduced depth and a margin. In this case the margin is the loss of a turn, and the reduction is often 3 plies (R=2 plies plus the null move itself). The typical situation where it works is a CUT node where the score is well above beta.
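For concreteness, here is a minimal sketch of how this is typically wired into a negamax search. Position, evaluate, qsearch and the null-move helpers are placeholders for whatever the engine provides, not any particular engine's API.

Code: Select all

struct Position {                 // assumed engine interface (placeholder)
    bool inCheck() const;
    void makeNullMove();
    void unmakeNullMove();
};
int evaluate(const Position&);                // static eval (assumed)
int qsearch(Position&, int alpha, int beta);  // quiescence search (assumed)

int search(Position& pos, int depth, int alpha, int beta) {
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    int eval = evaluate(pos);

    const int R = 2;  // the classic fixed reduction
    // Try the null move only at expected CUT nodes (eval >= beta),
    // and never when in check, where passing would be illegal.
    if (eval >= beta && !pos.inCheck() && depth > R + 1) {
        pos.makeNullMove();
        // depth - 1 - R: one ply for the null move itself plus R more,
        // the 3-ply total reduction mentioned above.
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        pos.unmakeNullMove();
        if (score >= beta)
            return beta;  // opponent couldn't exploit the free move: cut
    }

    // ... the regular move loop would follow here ...
    return alpha;
}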

I ran some positions through my engine and captured the results of the null move as well as the results of a full search. If NM is restricted to expected CUT nodes, where eval>=beta (the typical scenario), then I observe the following:
1. The node gets cut 97.4% of the time. That is, the vast majority of the time an expected CUT node with eval>=beta is an actual CUT node.
2. NM produces a true positive 68% of the time. That is, 68% of these nodes are true CUT nodes that NM would cut. This is the positive effect of NM, but it only cuts approximately 2/3 of the nodes, leaving 1/3 uncut.
3. NM produces false positives in 0.1% of nodes. That is, it cuts a node that would not be cut under a full search. This percentage is pretty low. (The bookkeeping behind these numbers is sketched below.)
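For what it's worth, the bookkeeping can be as simple as the following sketch; the counter names are made up, and the two verdicts are assumed to come from running the null-move probe and the unreduced search side by side at each expected CUT node.

Code: Select all

// Tally at each expected CUT node (eval >= beta): compare the
// null-move verdict against the full-depth verdict.
struct NmStats {
    long expectedCut = 0;  // nodes with eval >= beta
    long actualCut   = 0;  // full search fails high (the 97.4%)
    long nmTruePos   = 0;  // null move cuts, full search agrees (the 68%)
    long nmFalsePos  = 0;  // null move cuts, full search disagrees (the 0.1%)
};

void tally(NmStats& s, bool nmFailsHigh, bool fullFailsHigh) {
    ++s.expectedCut;
    if (fullFailsHigh)                 ++s.actualCut;
    if (nmFailsHigh && fullFailsHigh)  ++s.nmTruePos;
    if (nmFailsHigh && !fullFailsHigh) ++s.nmFalsePos;
}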

But instead of NM we could do a regular search with reduced depth and an explicit margin. I tried searching with the same depth reduction (depth-3) and a margin of 100 centipawns (that is, with a window of [beta+99, beta+100]). This margin generated the same 0.1% rate of false positives, but a much higher rate of true positives: 83%. That is, it left only approximately 1/6 of the nodes uncut. This leads to a substantial improvement in time-to-depth.
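A sketch of that alternative, with the same placeholder search as above: the zero-window probe is simply shifted up by an explicit margin instead of playing a null move.

Code: Select all

const int MARGIN = 100;  // centipawns, as in the experiment above

// Reduced-depth zero-window probe around beta + MARGIN; a fail-high
// here is taken as license to cut the node.
bool marginCutoff(Position& pos, int depth, int beta) {
    int score = search(pos, depth - 3, beta + MARGIN - 1, beta + MARGIN);
    return score >= beta + MARGIN;
}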

But when I ran this approach in actual game play, the results were terrible; it clearly introduced errors that negatively impact the search. Even though the rate of errors (false positives) is the same as under NM reduction, these errors appear to have a much larger negative impact.

My question is: why?
It appears that NM produces errors that don't matter, while the regular search produces the same number of errors, but they do matter.
NM does this by effectively excluding positions where the player's material is under immediate attack and refusing to cut them. But could we do something about those positions too? Cutting some portion of them may significantly improve search efficiency.

Whatever thoughts or observations you have would be appreciated.
elcabesa
Posts: 855
Joined: Sun May 23, 2010 1:32 pm

Re: thoughts on null-move reduction

Post by elcabesa »

Am I wrong, or does your explanation not actually say anything about the null move itself?

Null move works because, when the eval is above beta, we test that even after giving the opponent a free move he cannot hang us.

Sorry if I misunderstood your explanation.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: thoughts on null-move reduction

Post by bob »

ymatioun wrote:I am trying to understand why null-move reduction works so well, and possibly find a way to make it even better. [...] Whatever thoughts or observations you have would be appreciated.
There is a HUGE difference between a null-move search and a regular search, namely that you didn't move. I'd be willing to bet that if I could get away with forcing my opponent to play a null move one time in a game (i.e. I get two consecutive moves) I would beat a GM way more often than expected. What makes null-move so useful is that if you can let your opponent move twice in a row and you STILL come out ahead, your position is so good that a fail high on a shallower search is not going to hurt.

If you play normal chess, a fail high on a shallower search is going to introduce errors, since all you did is drop a couple of plies with nothing to compensate for it. Why do you do iteration N+1 after you complete iteration N? To get more accurate information. Whacking a couple of those plies distorts the answer seriously.

If you look at the tree, null-move REALLY helps after bad captures. Your opponent plays QxP, you play NxQ, and now whenever it is your turn at later plies, a null-move doesn't kill you because you are so far ahead. And the paths with that QxP, NxQ move pair get reduced and eliminated with much less effort.

It is all about the null-move. Think about a game where you can force your opponent to play a null move just once, but at a point of your choosing. He now cannot leave himself open to unusual mate threats where you can play two consecutive moves. He can't leave ANY piece where it can be attacked by a lesser-valued piece, because you will attack it, make him play a null, and then take the piece and win material. That is an impossible advantage to overcome, unless you are already so far ahead that even given two consecutive moves he can't catch back up.
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: thoughts on null-move reduction

Post by hgm »

ymatioun wrote:Whatever thoughts or observations you have would be appreciated.
What you describe is indeed suspect. A reduced real search should be much less reliable in detecting trouble than a null-move search of equal reduction (i.e. it should cause far more false positives), because the real search can start with delaying moves that push the problem over the horizon, to a place where only the unreduced search could have detected it. A null move doesn't delay anything; the opponent can start launching his attack immediately.

In addition there is the possibility that a true defence against the threat can be played pre-emptively, instead of the null move.
ymatioun
Posts: 64
Joined: Fri Oct 18, 2013 11:40 pm
Location: New York

Re: thoughts on null-move reduction

Post by ymatioun »

Thank you for all your comments.

What I am trying to get out of this is to make null-move reduction more effective. I think this can be done if I find a way to vary its intensity.
Example: LMR. We can vary its intensity by reducing more or by reducing less. The optimal result is achieved when the marginal benefit of more reduction (lower time-to-depth) equals the marginal cost (loss of precision).
From my testing it looks like NM reduction is currently underutilized: the marginal benefit appears to be significantly higher than the marginal cost. I reached this conclusion by noticing that adding NM reduction to a fixed-depth search has a very small negative impact on playing strength, while having a large (positive) impact on search time.
So, if we could do more NM reduction, I think playing strength would improve. And here increasing R does not really help: it does not increase the number of NM-reduction cuts, it just reduces the time spent on NM reductions, and that time is already pretty small.
Of course, to do more of it we need some sort of lever, and that is what I am trying to find: if we turn the implicit margin underlying the null move into some sort of explicit margin, we can vary it up or down.

I tried to use an increase in beta as an explicit margin, but it does not approximate the null-move margin at all. I think HGM has the right idea: the NM margin comes largely from the fact that NM makes sure the player cannot start a long sequence of delaying moves, thereby pushing an inevitable loss beyond the horizon.

I think we can generalize this as follows: NM excludes all moves by the player; a normal search includes all of them. What if we only include some of them?
Example 1: exclude all player moves that can delay the opponent's attack: checks, promotions, bad and equal captures, maybe even good captures if the material gain is not too high. This approach would be close to a regular search.
Example 2: only allow the player to move hanging pieces. This approach would be closer to the full null move (equal to NM if there are no hanging pieces); a rough sketch follows below.
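A minimal sketch of Example 2, assuming helpers like Move, generateMoves, makeMove/unmakeMove, moveFrom and isHanging exist in the engine; search() is the placeholder routine from the earlier sketch.

Code: Select all

// Verification search where the side to move may either pass (the
// null move) or move a hanging piece. With no hanging pieces this
// reduces to the plain null-move probe.
int restrictedVerify(Position& pos, int depth, int beta) {
    const int R = 2;

    // Option 1: the plain null move.
    pos.makeNullMove();
    int best = -search(pos, depth - 1 - R, -beta, -beta + 1);
    pos.unmakeNullMove();
    if (best >= beta)
        return best;

    // Option 2: moves that rescue a hanging piece.
    for (Move m : generateMoves(pos)) {
        if (!isHanging(pos, moveFrom(m)))
            continue;
        pos.makeMove(m);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        pos.unmakeMove(m);
        if (score > best)
            best = score;
        if (best >= beta)
            break;
    }
    return best;
}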

I’ll try to play with these approaches and see if I get anything out of them.

Again, thanks for your comments.
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: thoughts on null-move reduction

Post by Michael Sherwin »

Just in the last few years a variable R factor has proven to add strength. A simple approach might be the equation r = (depth + 14) / 5. This means that r starts off at the classic r = 3 and reaches r = 4 when the remaining depth is at least 6. Raising r before a remaining depth of 6 gives a substantial loss of strength. Results for the above formula vary widely depending on the computer opponent. The formula r = (depth + 18) / 6 may be better. There are of course endless formulas. You can search for "variable r factor" and you should find some posts by Dann Corbit on this subject.
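In code the formulas are just integer divisions on the remaining depth (values per the formulas above; untested here).

Code: Select all

// r = (depth + 14) / 5: r = 3 for depth 1..5, r = 4 for depth 6..10, ...
// r = (depth + 18) / 6: r = 3 for depth 1..5, r = 4 for depth 6..11, ...
int variableR(int depth) {
    return (depth + 14) / 5;   // or (depth + 18) / 6
}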

However, like you said yourself, the cost of null move is minimal. How much, therefore, can you expect to gain by making it just a little bit faster, or even a lot faster? Not much, I believe, is the answer.

An area to think about for search reduction is based on the idea that deeper searches push blunders farther away from the root when backing up scores. Therefore, if the backed-up score comes from a PV that includes a blunder, there are many opportunities to choose different moves before reaching the blunder position. I am wondering about the following idea: going forward, if it can be detected that white/black has, at a node, plenty of safe alternatives, and then at a later node black/white also has plenty of safe alternatives, then any deeper search may not be necessary.

Another idea involves a double null move: if you can make two null moves (NM, OM, NM) and the score is not brought below beta, then that position never needs to be searched any deeper. Maybe store that position in the TT with infinite draft (depth).
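For illustration, a rough sketch of the NM, OM, NM test under the same assumed interface as the earlier sketches; the depth accounting and the "infinite draft" TT store are guesses at one possible implementation, not a tested recipe.

Code: Select all

// NM, OM, NM: we pass, the opponent plays any reply, we pass again.
// If no reply brings the score below beta, the position could be
// stored in the TT with maximal draft. Helpers are assumed.
bool doubleNullProof(Position& pos, int depth, int beta) {
    const int R = 2;
    pos.makeNullMove();                      // first NM
    bool proven = true;
    for (Move m : generateMoves(pos)) {      // each opponent reply (OM)
        pos.makeMove(m);
        pos.makeNullMove();                  // second NM
        // standard null probe: negate to get the score from our side
        int score = -search(pos, depth - 3 - 2 * R, -beta, -beta + 1);
        pos.unmakeNullMove();
        pos.unmakeMove(m);
        if (score < beta) { proven = false; break; }
    }
    pos.unmakeNullMove();
    return proven;  // if true: candidate for a TT entry with maximal draft
}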
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: thoughts on null-move reduction

Post by Dann Corbit »

Michael Sherwin wrote:[...] if it can be detected that white/black has, at a node, plenty of safe alternatives, and then at a later node black/white also has plenty of safe alternatives, then any deeper search may not be necessary. Another idea involves a double null move: if you can make two null moves (NM, OM, NM) and the score is not brought below beta, then that position never needs to be searched any deeper. Maybe store that position in the TT with infinite draft (depth).
I suspect that the NM, OM, NM idea with infinite draft won't gain much.
Consider the savage reductions available today: with two successful null moves in the PV line, that pathway is almost done away with anyway (a trivial search compared to its siblings).

The mobility idea is interesting. I think that null moves might also get a reduction that is a function of mobility. And if the mobility is one move, then there is no reduction (for obvious reasons).
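A sketch of what that might look like, with made-up mobility thresholds layered on the variable-R formula from earlier in the thread.

Code: Select all

// Null-move reduction scaled by mobility (thresholds are invented).
// mobility = number of legal moves for the side to move (assumed).
int nullReduction(int depth, int mobility) {
    if (mobility <= 1)
        return 0;                    // only one legal move: no reduction
    int r = (depth + 14) / 5;        // variable-R base from above
    if (mobility > 30)       r += 1; // very mobile: reduce more
    else if (mobility < 10)  r -= 1; // cramped position: reduce less
    return r;
}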