Adjustable search pruning depending on time control

Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: Adjustable search pruning depending on time control

Post by Rein Halbersma »

lkaufman wrote:Among top engines, only Stockfish changes the LMR formula based on depth, as far as I know. They reduce more as the depth goes up. In our testing with Komodo, we have never found evidence that considering depth in the LMR formula has any benefit. Of course, at very low depths where LMP, futility, etc. are done, you can prune more and more the closer you are to the leaf. Most top engines do increase the amount of null move reduction with increased depth.

As far as I know, everyone bases these depth-related changes on the remaining depth, rather than the time control. It is not apparent how considering the time control would help further, but it is possible.
IIRC, Stockfish only reduces logarithmically more as depth increases. This means that every move will eventually be searched ever more deeply as the nominal depth at the root increases (i.e. the reduced depth is increasing in the nominal depth).
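
To illustrate (with a made-up constant, not Stockfish's actual numbers): if the reduction is logarithmic in depth, the reduced depth d - R(d) still grows with d, so every move gets searched ever more deeply as the root depth rises.

Code: Select all

#include <cmath>
#include <cstdio>

// Hypothetical log-based reduction in the spirit of Stockfish's table;
// the divisor 1.5 is invented for illustration.
int reduction(int depth, int moveCount) {
    return (int)(std::log((double)depth) * std::log((double)moveCount) / 1.5);
}

int main() {
    // Reduced depth keeps increasing with nominal depth.
    for (int d = 4; d <= 32; d += 4)
        std::printf("depth %2d -> reduced depth %2d\n", d, d - reduction(d, 16));
}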
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

Rein Halbersma wrote:
lkaufman wrote:Among top engines, only Stockfish changes the LMR formula based on depth, as far as I know. They reduce more as the depth goes up. In our testing with Komodo, we have never found evidence that considering depth in the LMR formula has any benefit. Of course, at very low depths where LMP, futility, etc. are done, you can prune more and more the closer you are to the leaf. Most top engines do increase the amount of null move reduction with increased depth.

As far as I know, everyone bases these depth-related changes on the remaining depth, rather than the time control. It is not apparent how considering the time control would help further, but it is possible.
IIRC, Stockfish only reduces logarithmically more as depth increases. This means that every move will eventually be searched ever more deeply as the nominal depth at the root increases (i.e. the reduced depth is increasing in the nominal depth).
Of course; to do otherwise would be just stupid, I think. But we have found no evidence that this policy of increased reduction with increased depth is sound, at least in Komodo. As far as I know, Rybka, Ippo/Ivanhoe, Critter, Houdini (at least 1.5, don't know about 3.0) and all other engines of which I have knowledge agree with Komodo on this point. I would like to know if anyone has evidence that this policy helps Stockfish. Of course to determine this one must decide which depth should be the one that defines the reduction in Stockfish if it is not depth-dependent. I know Stockfish got a big elo gain when they implemented this, but that was compared to a super-conservative previous LMR so it doesn't address the question.
User avatar
rvida
Posts: 481
Joined: Thu Apr 16, 2009 12:00 pm
Location: Slovakia, EU

Re: Adjustable search pruning depending on time control

Post by rvida »

lkaufman wrote:As far as I know, Rybka, Ippo/Ivanhoe, Critter, Houdini (at least 1.5, don't know about 3.0) and all other engines of which I have knowledge agree with Komodo on this point.
Not entirely true. They do some depth-based reductions: in cut nodes, when evading check.

The following code snippet is from Ivanhoe's "cut_node.c", function MyCutCheck:

Code: Select all

if (cnt >= 1)
{
  if (depth > 8)
    REDUCTION = BSR (depth - 7);
  else
    REDUCTION = 0;
  REDUCTION += 1 + MIN (cnt, 2);
  new_depth = depth + EXTEND_IN_CHECK - REDUCTION - 2;
...
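
(For clarity: BSR is the x86 bit-scan-reverse, i.e. the index of the highest set bit, so BSR(x) == floor(log2(x)), and the extra check-evasion reduction grows logarithmically with the remaining depth. A portable sketch of the helper, mine rather than Ivanhoe's:)

Code: Select all

#include <cassert>

// floor(log2(x)) for x > 0, equivalent to the x86 BSR instruction.
static int BSR(unsigned x) {
    assert(x != 0);
    int i = 0;
    while (x >>= 1)
        ++i;
    return i;   // index of the highest set bit
}
// So for depth > 8: depth 9-10 -> +1, 11-14 -> +2, 15-22 -> +3, ...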
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

rvida wrote:
lkaufman wrote:As far as I know, Rybka, Ippo/Ivanhoe, Critter, Houdini (at least 1.5, don't know about 3.0) and all other engines of which I have knowledge agree with Komodo on this point.
Not entirely true. They do some depth-based reductions: in cut nodes, when evading check.

The following code snippet is from Ivanhoe's "cut_node.c", function MyCutCheck:

Code: Select all

if (cnt >= 1)
{
  if (depth > 8)
    REDUCTION = BSR (depth - 7);
  else
    REDUCTION = 0;
  REDUCTION += 1 + MIN (cnt, 2);
  new_depth = depth + EXTEND_IN_CHECK - REDUCTION - 2;
...
Yes, I quite forgot about that little detail. We couldn't find any benefit from this idea of varying reduction with depth when in check. Even reducing in check at all is rather a close call. I'm pretty sure that if the above code has any benefit, it would be worth a fraction of an elo point (compared to just using the 1 + MIN (cnt, 2) rule) and therefore would require hundreds of thousands of games to confirm. If this idea is also in Rybka (I'm not sure about this but I seem to recall so), it would be a sure sign of copying by Ippo as it is hard to credit the notion that they independently checked it out to this extent. Do you have any idea of the logic behind this piece of code? To me it makes little sense to reduce by many plies when in check, even at cut nodes, just because the depth is high.
User avatar
rvida
Posts: 481
Joined: Thu Apr 16, 2009 12:00 pm
Location: Slovakia, EU

Re: Adjustable search pruning depending on time control

Post by rvida »

lkaufman wrote: Yes, I quite forgot about that little detail. We couldn't find any benefit from this idea of varying reduction with depth when in check. Even reducing in check at all is rather a close call. I'm pretty sure that if the above code has any benefit, it would be worth a fraction of an elo point (compared to just using the 1 + MIN (cnt, 2) rule) and therefore would require hundreds of thousands of games to confirm.
I think the methodology behind this was to push LMR as far as possible until it starts to measurably hurt, then pull one step back.
lkaufman wrote: If this idea is also in Rybka (I'm not sure about this but I seem to recall so), it would be a sure sign of copying by Ippo as it is hard to credit the notion that they independently checked it out to this extent. Do you have any idea of the logic behind this piece of code? To me it makes little sense to reduce by many plies when in check, even at cut nodes, just because the depth is high.
In Rybka 3 the amount of reduction does not rise logarithmically with depth:

Code: Select all

reduction = 0;

if (move_count >= 2)  // one ply reduction starting with 3rd move
  reduction = 2;

if (cut_node && depth >= 12 && move_count >= 1)  // reduce 2 more plies
  reduction += 4;
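
(The snippet's comments suggest Rybka's internal depth unit is a half ply: reduction = 2 is "one ply". In plain one-ply units the rule would read roughly as below; my transcription, under that half-ply assumption.)

Code: Select all

// Rybka-3-style reduction rewritten in one-ply units (assuming the
// original's depth granularity is a half ply, as its comments imply).
int rybka3_style_reduction(bool cut_node, int depth_plies, int move_count) {
    int reduction_plies = 0;
    if (move_count >= 2)                      // from the 3rd move on
        reduction_plies = 1;                  // flat one-ply reduction
    if (cut_node && depth_plies >= 6 && move_count >= 1)
        reduction_plies += 2;                 // flat step, not logarithmic
    return reduction_plies;
}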

lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

rvida wrote:
lkaufman wrote: Yes, I quite forgot about that little detail. We couldn't find any benefit from this idea of varying reduction with depth when in check. Even reducing in check at all is rather a close call. I'm pretty sure that if the above code has any benefit, it would be worth a fraction of an elo point (compared to just using the 1 + MIN (cnt, 2) rule) and therefore would require hundreds of thousands of games to confirm.
I think the methodology behind this was to push LMR as far as possible until it starts to measurably hurt, then pull one step back.
lkaufman wrote: If this idea is also in Rybka (I'm not sure about this but I seem to recall so), it would be a sure sign of copying by Ippo as it is hard to credit the notion that they independently checked it out to this extent. Do you have any idea of the logic behind this piece of code? To me it makes little sense to reduce by many plies when in check, even at cut nodes, just because the depth is high.
In Rybka 3 the amount of reduction does not rise logarithmically with depth:

Code: Select all

reduction = 0;

if (move_count >= 2)  // one ply reduction starting with 3rd move
  reduction = 2;

if (cut_node && depth >= 12 && move_count >= 1)  // reduce 2 more plies
  reduction += 4;

Thanks. The methodology you describe would tend to lead to overdoing pruning or reduction, which is my general impression of what Ippo has done, so I think you are right.
Can you (or anyone else reading this) think of any logic behind increasing reduction with depth when in check but not otherwise?
User avatar
Eelco de Groot
Posts: 4567
Joined: Sun Mar 12, 2006 2:40 am

Re: Adjustable search pruning depending on time control

Post by Eelco de Groot »

lkaufman wrote:
Rein Halbersma wrote:
lkaufman wrote:Among top engines, only Stockfish changes the LMR formula based on depth, as far as I know. They reduce more as the depth goes up. In our testing with Komodo, we have never found evidence that considering depth in the LMR formula has any benefit. Of course, at very low depths where LMP, futility, etc. are done, you can prune more and more the closer you are to the leaf. Most top engines do increase the amount of null move reduction with increased depth.

As far as I know, everyone bases these depth-related changes on the remaining depth, rather than the time control. It is not apparent how considering the time control would help further, but it is possible.
IIRC, Stockfish only reduces logarithmically more as depth increases. This means that every move will eventually be searched ever more deeply as the nominal depth at the root increases (i.e. the reduced depth is increasing in the nominal depth).
Of course; to do otherwise would be just stupid, I think. But we have found no evidence that this policy of increased reduction with increased depth is sound, at least in Komodo. As far as I know, Rybka, Ippo/Ivanhoe, Critter, Houdini (at least 1.5, don't know about 3.0) and all other engines of which I have knowledge agree with Komodo on this point. I would like to know if anyone has evidence that this policy helps Stockfish. Of course to determine this one must decide which depth should be the one that defines the reduction in Stockfish if it is not depth-dependent. I know Stockfish got a big elo gain when they implemented this, but that was compared to a super-conservative previous LMR so it doesn't address the question.
I don't quite see how it can be measured reliably if you don't also take into account what is not reduced; that may be very different for each of those programs. In the case of Stockfish, if you just say that tactical moves are not reduced, it may also be that the authors did not want to lose much tactical strength to LMR, and that they did not look only at elo when implementing this strong pruning. But anyway, the exact reduction is not so terribly critical, I think. I believe the log of depth (or log of X, whatever X is) can reach something like six easily, but to reach seven it needs many more iterations, so the reduction forms more of a plateau than a curve that keeps increasing.
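
(A quick illustration of that plateau, with my own numbers and a Stockfish-like non-PV formula, not measured from any engine:)

Code: Select all

#include <cmath>
#include <cstdio>

int main() {
    // R = 0.33 + log(depth) * log(moveCount) / 2.25 at move count 32:
    // each extra ply of reduction needs the depth to roughly double,
    // so at practical depths the reduction flattens out.
    for (int d = 8; d <= 64; d *= 2) {
        double r = 0.33 + std::log((double)d) * std::log(32.0) / 2.25;
        std::printf("depth %2d -> reduction %.2f plies\n", d, r);
    }
}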

In Rainbow Serpent the reductions are even more strongly dependent on depth. I can't say I have measured whether this is far past the point where LMR starts to hurt, but it is probably not as important as the increased Late Move Pruning that is also in place...

Code: Select all

  // Init reductions array
  for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
  {
      double    pvRed = log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 3.0;
      double nonPVRed = 0.33 + log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 2.25;
      Reductions[1][hd][mc] = (int8_t) (   pvRed >= 1.0 ? floor(   pvRed * int(ONE_PLY)) : 0);
      Reductions[0][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

Eelco de Groot wrote:
lkaufman wrote:
Rein Halbersma wrote:
lkaufman wrote:Among top engines, only Stockfish changes the LMR formula based on depth, as far as I know. They reduce more as the depth goes up. In our testing with Komodo, we have never found evidence that considering depth in the LMR formula has any benefit. Of course, at very low depths where LMP, futility, etc. are done, you can prune more and more the closer you are to the leaf. Most top engines do increase the amount of null move reduction with increased depth.

As far as I know, everyone bases these depth-related changes on the remaining depth, rather than the time control. It is not apparent how considering the time control would help further, but it is possible.
IIRC, Stockfish only reduces logarithmically more as depth increases. This means that every move will eventually be searched ever more deeply as the nominal depth at the root increases (i.e. the reduced depth is increasing in the nominal depth).
Of course; to do otherwise would be just stupid, I think. But we have found no evidence that this policy of increased reduction with increased depth is sound, at least in Komodo. As far as I know, Rybka, Ippo/Ivanhoe, Critter, Houdini (at least 1.5, don't know about 3.0) and all other engines of which I have knowledge agree with Komodo on this point. I would like to know if anyone has evidence that this policy helps Stockfish. Of course to determine this one must decide which depth should be the one that defines the reduction in Stockfish if it is not depth-dependent. I know Stockfish got a big elo gain when they implemented this, but that was compared to a super-conservative previous LMR so it doesn't address the question.
I don't quite see how it can be measured reliably if you don't also take into account what is not reduced; that may be very different for each of those programs. In the case of Stockfish, if you just say that tactical moves are not reduced, it may also be that the authors did not want to lose much tactical strength to LMR, and that they did not look only at elo when implementing this strong pruning. But anyway, the exact reduction is not so terribly critical, I think. I believe the log of depth (or log of X, whatever X is) can reach something like six easily, but to reach seven it needs many more iterations, so the reduction forms more of a plateau than a curve that keeps increasing.

In Rainbow Serpent the reductions are even more strongly dependent on depth. I can't say I have measured whether this is far past the point where LMR starts to hurt, but it is probably not as important as the increased Late Move Pruning that is also in place...

Code: Select all

  // Init reductions array
  for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
  {
      double    pvRed = log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 3.0;
      double nonPVRed = 0.33 + log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 2.25;
      Reductions[1][hd][mc] = (int8_t) (   pvRed >= 1.0 ? floor(   pvRed * int(ONE_PLY)) : 0);
      Reductions[0][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
Regards, Eelco
Maybe the exact reduction is not "terribly critical", but of course it does make some difference. The question is whether there is any evidence that having LMR vary with depth is superior to just optimizing it without a depth dependence. Have you tested your engine against a version without depth-dependent LMR? In general, we never add a complication to Komodo (such as depth-dependence of LMR would be) without evidence that it helps the elo. I would have thought that would be standard practice, but perhaps not. In your case, why would you add this complication unless you first proved that it helped (perhaps you did, I'm just asking in case you did not)?

Regards, Larry
Pio
Posts: 334
Joined: Sat Feb 25, 2012 10:42 pm
Location: Stockholm

Re: Adjustable search pruning depending on time control

Post by Pio »

Hi Larry!

I think there is some logic in having the reductions depend on depth, and that is due to alpha-beta. There is less reason to look at bad-looking moves close to the leaves, since they will then have only a few moves left to improve their scores relative to their siblings. If the moves cannot improve the score, there is no point in evaluating them.

I think reducing moves more aggressively the closer they are to the leaves will work as a complement to lazy evaluation.
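
(As a generic sketch of that idea, with made-up thresholds rather than any engine's actual rule: the move-count limit shrinks with the remaining depth, so bad-looking moves near the leaves are dropped entirely.)

Code: Select all

// Late-move-pruning sketch: quiet moves beyond a depth-dependent
// move-count limit are skipped when little depth remains.
bool prune_late_move(int remaining_depth, int move_count, bool is_quiet) {
    if (!is_quiet || remaining_depth > 4)
        return false;                       // only prune quiets near the leaves
    int limit = 3 + remaining_depth * remaining_depth;  // invented threshold
    return move_count > limit;              // few plies left to recover
}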

I might have understood everything completely wrong.
User avatar
Eelco de Groot
Posts: 4567
Joined: Sun Mar 12, 2006 2:40 am

Re: Adjustable search pruning depending on time control

Post by Eelco de Groot »

No, I have not tested this against other possible formulas, other than that, if the new rule is clearly inferior, I think I will find out after a while. (And as I said, I think the exceptions to the reduction are at least as important as the rule itself. Also, making the rule more dependent on depth meant that the difference in reductions based on move count is not changed, which I would consider much more of a risk.)

There are so many possible variations you could try, and if you have to test all of the hundreds of variables one by one, what time is there left to try new ideas and different algorithms? It seems to me that expressing the reduction as a function of depth is just a way of formulating the reduction as a percentage of depth, which seems perfectly natural. Others will use a table, but you can tune both to get exactly the same results; the formula is easier to modify and perhaps easier to tune. But I'm no mathematician and haven't tuned it; Marco and Joona would have a better explanation, I'm sure.
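
(A sketch of that equivalence, with example numbers of my own: precompute the table from the formula once at startup, and tuning either representation gives the same search.)

Code: Select all

#include <cmath>

int ReductionTable[64][64];   // [remaining depth][move count]

// Fill the lookup table from a closed-form formula; a table tuned by
// hand could hold exactly the same values.
void init_reductions() {
    for (int d = 1; d < 64; ++d)
        for (int mc = 1; mc < 64; ++mc)
            ReductionTable[d][mc] =
                (int)(std::log((double)d) * std::log((double)mc) / 2.25);
}

inline int reduction(int d, int mc) {
    return ReductionTable[d < 64 ? d : 63][mc < 64 ? mc : 63];
}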

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan