I am trying to understand why null-move reduction works so well, and possibly find a way to make it even better. Here are some thoughts and questions I have on this topic.
Null-move reduction works by searching a node with reduced depth and a margin. In this case the margin is the loss of a turn, and the reduction is often 3 plies (R=2 plies plus the null move itself). The typical situation where it works is a CUT node where the score is well above beta.
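As a point of reference, here is a minimal sketch of that mechanism, assuming hypothetical static_eval/make_null/unmake_null hooks and a negamax search_fn callback; none of this is from any specific engine:

```python
R = 2  # classic reduction: the probe is R + 1 plies shallower, counting the null move itself

def try_null_cutoff(pos, depth, beta, search_fn):
    """Attempt a null-move cutoff at an expected CUT node.

    pos is a hypothetical position object exposing static_eval(),
    make_null() and unmake_null(); search_fn is the engine's negamax
    search taking (pos, depth, alpha, beta)."""
    if pos.static_eval() < beta:          # only try it where eval >= beta
        return False
    pos.make_null()                       # give the opponent a free move
    # zero-window probe at depth - 1 - R, scored from the opponent's side
    score = -search_fn(pos, depth - 1 - R, -beta, -beta + 1)
    pos.unmake_null()
    return score >= beta                  # still >= beta after passing: cut
```

The implicit margin is exactly the value of the tempo handed to the opponent.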
I ran some positions through my engine and captured the results of the null move as well as the results of a full search. If NM is restricted to expected CUT nodes, where eval >= beta (the typical scenario), then I observe the following:
1. The node gets cut 97.4% of the time. That is, the vast majority of the time an expected CUT node with eval >= beta is an actual CUT node.
2. NM produces a true positive 68% of the time. That is, 68% of these nodes are true CUT nodes that NM would cut. This is the positive effect of NM, and here it cuts only approximately 2/3 of the nodes, leaving 1/3 uncut.
3. NM produces a false positive in 0.1% of nodes. That is, it cuts a node that would not be cut under a full search. This percentage is pretty low.
But instead of NM we could do a regular search with reduced depth and an explicit margin. I tried searching with the same depth reduction (depth minus 3) and a margin of 100 centipawns (that is, with a window of [beta+99, beta+100]). This margin generated the same 0.1% rate of false positives, but a much higher rate of true positives: 83%. That is, it left only approximately 1/6 of the nodes uncut, which leads to a substantial improvement in time-to-depth.
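The alternative tried here, a reduced regular search against an explicit margin, can be sketched the same way (search_fn is again a hypothetical negamax callback; the 100 cp margin and reduction of 3 are the values from the experiment):

```python
def margin_cutoff(pos, depth, beta, search_fn, margin=100, reduction=3):
    """Regular (non-null) search, reduced by `reduction` plies, asked only
    whether the score clears beta + margin, i.e. a zero window of
    [beta + margin - 1, beta + margin]. Cut the node if it does."""
    target = beta + margin
    score = search_fn(pos, depth - reduction, target - 1, target)
    return score >= target
```

Unlike the null-move probe, the side to move keeps its turn here, which is exactly what lets delaying moves hide trouble below the reduced horizon.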
But when I ran this approach through actual game play, the results were terrible; this approach clearly introduced errors that negatively impact the search. Even though the rate of errors (false positives) is the same as under NM reduction, these errors appear to have a much larger negative impact on the search.
My question is: why?
It appears that NM produces errors that don't matter, while the regular search produces the same number of errors, but they do matter.
NM does this by effectively excluding positions where the player's material is under immediate attack, and it refuses to cut them. But could we do something about those positions too? Cutting some portion of them could significantly improve search efficiency.
Whatever thoughts or observations you have would be appreciated.
thoughts on nullmove reduction
Moderators: Harvey Williamson, bob, hgm
Re: thoughts on nullmove reduction
Am I wrong, or did I not read anything in your explanation about the NULL MOVE itself?
Null move works because, when the eval is above beta, we test that even after giving a free move to the opponent he cannot hang us.
Sorry if I misunderstood your explanation.
Re: thoughts on nullmove reduction
There is a HUGE difference between a null-move search and a regular search, namely that you didn't move. I'd be willing to bet that if I could get away with forcing my opponent to play a null move one time in a game (i.e. I get two consecutive moves) I would beat a GM way more often than expected. What makes the null move so useful is that if you can let your opponent move twice in a row and you STILL come out ahead, your position is so good that a fail high on a shallower search is not going to hurt.
If you play normal chess, a fail high on a shallower search is going to introduce errors, since all you did is drop a couple of plies with nothing to compensate for it. Why do you do iteration N+1 after you complete iteration N? To get more accurate information. Whacking a couple of those plies distorts the answer seriously.
If you look at the tree, the null move REALLY helps after bad captures. Your opponent plays QxP, you play NxQ, and now whenever it is your turn at later plies a null move doesn't kill you, because you are so far ahead. And the paths containing that QxP, NxQ move pair get reduced and eliminated with much less effort.
It is all about the null move. Think about a game where you can force your opponent to play a null move just once, but at a point of your choosing. He now cannot leave himself open to unusual mate threats where you can play two consecutive moves. He can't leave ANY piece where it can be attacked by a lesser-valued piece, because you will attack it, make him play a null, and then take the piece and win material. That is an impossible advantage to overcome, unless you are already so far ahead that even given two consecutive moves he can't catch back up.
 hgm
 Location: Amsterdam
 Full name: H G Muller
Re: thoughts on nullmove reduction
What you describe is indeed suspect. A reduced real search should be much less reliable in detecting trouble than a null-move search of equal reduction. (I.e. it should cause far more false positives.) That is because the real search can start with delaying moves that push the problem over the horizon, to a place where only the unreduced search could have detected it. A null move doesn't delay anything; the opponent can start launching his attack immediately.
ymatioun wrote: Whatever thoughts or observations you have would be appreciated.
In addition there is the possibility that a true defence against the threat can be played preemptively, instead of the null move.
Re: thoughts on nullmove reduction
Thank you for all your comments.
What I am trying to get out of this is to make null-move reduction more effective. I think this can be done if I find a way to vary its intensity.
Example: LMR. We can vary its intensity by reducing more or reducing less. The optimal result is achieved when the marginal benefit of more reduction (lower time-to-depth) equals the marginal cost (loss of precision).
From my testing it looks like NM reduction is currently underutilized: the marginal benefit appears to be significantly higher than the marginal cost. I reached this conclusion by noticing that adding NM reduction to a fixed-depth search has a very small negative impact on playing strength, while having a large (positive) impact on search time.
So if we could do more NM reduction, I think playing strength would improve. And here increasing R does not really help: it does not increase the number of NM-reduction cuts, it just reduces the time spent on NM reductions, and that time is already pretty small.
Of course, to do more of it we need some sort of lever. And that is what I am trying to find: if we turn the implicit margin underlying the null move into some sort of explicit margin, we can vary it up or down.
I tried to use an increase in beta as an explicit margin, but it does not approximate the null-move margin at all. I think HGM has the right idea: the NM margin comes largely from the fact that NM makes sure the player cannot start a long sequence of delaying moves, thereby pushing an inevitable loss beyond the horizon.
I think we can generalize this as follows: NM excludes all moves by the player; a normal search includes all of them. What if we only include some of them?
Example 1: exclude all player moves that can delay the opponent's attack: checks, promotions, and bad and equal captures. Maybe even good captures, if the material gain is not too high. This approach would be close to a regular search.
Example 2: only allow the player to move hanging pieces. This approach would be closer to a full null move (equal to NM if there are no hanging pieces).
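These two schemes can be sketched as a move filter. The move attributes (is_check, see_value, moves_hanging_piece, etc.) are hypothetical, and SEE <= 0 is my reading of "bad and equal captures":

```python
def restricted_moves(moves, mode):
    """Filter the player's moves between the two extremes: a full search
    (all moves) and a null move (no moves). Move objects are hypothetical,
    assumed to carry is_check / is_promotion / is_capture / see_value /
    moves_hanging_piece attributes."""
    if mode == "no_delaying":
        # example 1: drop everything that could merely delay the opponent's
        # attack: checks, promotions, and bad or equal captures (SEE <= 0)
        return [m for m in moves
                if not (m.is_check or m.is_promotion
                        or (m.is_capture and m.see_value <= 0))]
    if mode == "hanging_only":
        # example 2: only allow moves of pieces that are currently hanging
        return [m for m in moves if m.moves_hanging_piece]
    return list(moves)                     # anything else: full search
```

Searching the filtered list with the usual zero window would then sit somewhere between a null-move probe and a full reduced search.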
I’ll try to play with these approaches and see if I get anything out of them.
Again, thanks for your comments.

 Michael Sherwin
 Location: WY, USA
Re: thoughts on nullmove reduction
Just in the last few years a variable R factor has proven to add strength. A simple approach might be the formula r = (depth + 14) / 5. This means that r starts off at the classic r = 3 and reaches r = 4 when the remaining depth is at least 6. Raising r before a remaining depth of 6 gives a substantial loss of strength. Results for the above formula vary widely depending on the computer opponent. The formula r = (depth + 18) / 6 may be better. There are of course endless formulas. You can search for "variable R factor" and you should find some posts by Dann Corbit on this subject.
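Both formulas are easy to check numerically, assuming integer division; both start at the classic r = 3 and hit r = 4 at a remaining depth of 6, with the second climbing more slowly at higher depths:

```python
def variable_r_a(depth):
    """First formula from the post: r = (depth + 14) / 5, integer division."""
    return (depth + 14) // 5

def variable_r_b(depth):
    """Alternative formula from the post: r = (depth + 18) / 6."""
    return (depth + 18) // 6
```

For example, at a remaining depth of 11 the first formula already gives r = 5 while the second still gives r = 4.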
However, like you said yourself, the cost of the null move is minimal. How much, therefore, can you expect to gain by making it just a little bit faster, or even a lot faster? Not much, I believe, is the answer.
An area to think about for search reduction is based on the idea that deeper searches push blunders farther away from the root when backing up scores. Therefore, if the backed-up score is from a PV that includes a blunder, there are many opportunities to choose different moves before reaching the blunder position. I am wondering about the following idea: if it can be detected that white/black has, at a node, plenty of safe alternatives, and then at a later node black/white also has plenty of safe alternatives, then any deeper search may not be necessary. Another idea involves a double null move: if you can make two null moves (NM, OM, NM) and the score is not brought below beta, then that position never needs to be searched any deeper. Maybe store that position in the TT with infinite draft (depth).
If you are on a sidewalk and the covid goes beep beep
Just step aside or you might have a bit of heat
Covid covid runs through the town all day
Can the people ever change their ways
Sherwin the covid's after you
Sherwin if it catches you you're through

Re: thoughts on nullmove reduction
I suspect that the NM, OM, NM with infinite draft won't gain much.
Consider the savage reductions available today. With two successful null moves in the PV line, that pathway is almost done away with anyway (a trivial search compared to its siblings).
The mobility idea is interesting. I think that null moves might also get a reduction that is a function of mobility. And if the mobility is one move, then there is no reduction (for obvious reasons).
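A possible sketch of that last suggestion; the specific scaling ramp here is my own illustrative assumption, not something proposed in the thread:

```python
def null_move_reduction(mobility, base_r=3):
    """Scale the null-move reduction with the side to move's mobility.
    With a single legal move there is nothing to choose between, so no
    reduction at all (search full depth); the ramp from 1 up to base_r
    as mobility grows is an arbitrary illustrative choice."""
    if mobility <= 1:
        return 0                          # one legal move: no reduction
    return min(base_r, 1 + mobility // 10)
```

The intuition being tested: the more safe alternatives the mover has, the less a reduced probe risks missing the one move that mattered.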