futility pruning in stockfish

Discussion of chess software programming and technical issues.


Post by Engin » Wed May 25, 2011 1:33 pm

hello,

What I still don't understand is why the Stockfish team sticks with futility pruning based on the predicted depth instead of the normal depth.

What does the LMR-reduced depth have to do with futility?

Code from Stockfish:

Depth predictedDepth = newDepth - reduction<NonPV>(depth, moveCount);
futilityValueScaled = futilityBase + futility_margin(predictedDepth, moveCount)
                    + H.gain(pos.piece_on(move_from(move)), move_to(move));




But Glaurung 2.2 used the normal depth:

if (depth < 6 * OnePly && approximateEval < beta) {
    if (futilityValue == VALUE_NONE)
        futilityValue = evaluate(pos, ei, threadID)
                      + ((depth < 2 * OnePly) ?
                          FutilityMargin1 :
                          FutilityMargin2 + (depth - 2*OnePly) * 32);



The second thing is that Glaurung applied move-count pruning only up to depth 7, but Stockfish prunes up to depth 16?

Isn't that too much of a risk?


Re: futility pruning in stockfish

Post by Eelco de Groot » Wed May 25, 2011 4:35 pm

Engin wrote:
hello,

What I still don't understand is why the Stockfish team sticks with futility pruning based on the predicted depth instead of the normal depth.

What does the LMR-reduced depth have to do with futility?

Code from Stockfish:

Depth predictedDepth = newDepth - reduction<NonPV>(depth, moveCount);
futilityValueScaled = futilityBase + futility_margin(predictedDepth, moveCount)
                    + H.gain(pos.piece_on(move_from(move)), move_to(move));




But Glaurung 2.2 used the normal depth:

if (depth < 6 * OnePly && approximateEval < beta) {
    if (futilityValue == VALUE_NONE)
        futilityValue = evaluate(pos, ei, threadID)
                      + ((depth < 2 * OnePly) ?
                          FutilityMargin1 :
                          FutilityMargin2 + (depth - 2*OnePly) * 32);
I don't know the code of Stockfish 2.1 very well, but I think this is not such a big change; it also was not changed from version 2.0 to 2.1. Using the predicted depth is reasonably "logical", because if a move passes futility pruning, it is then first searched at a reduced depth:

Code: Select all

      {
          // Step 14. Reduced depth search
          // If the move fails high, it will be re-searched at full depth.
          bool doFullDepthSearch = true;
          alpha = SpNode ? sp->alpha : alpha;

          if (    depth >= 3 * ONE_PLY
              && !captureOrPromotion
              && !dangerous
              && !move_is_castle(move)
              &&  ss->killers[0] != move
              &&  ss->killers[1] != move)
          {
              ss->reduction = reduction<PvNode>(depth, moveCount);
              if (ss->reduction)
              {
                  alpha = SpNode ? sp->alpha : alpha;
                  Depth d = newDepth - ss->reduction;
                  value = -search<NonPV>(pos, ss+1, -(alpha+1), -alpha, d);

                  doFullDepthSearch = (value > alpha);
              }
              ss->reduction = DEPTH_ZERO; // Restore original reduction
          }

It only means that the futility margin will be a little bit smaller when the predicted depth is used as the index, but it really just depends on what numbers are used for the futility margins. reduction<NonPV>(depth, moveCount) is in any case a fairly small number as far as I could see; I have not made a table, but in most cases it seemed to me to be smaller than ONE_PLY.

In the PV (especially) I would prefer fewer reductions in the side branches, but that is also a question of the overall 'architecture' of the search: if you just do a lot of iterations you get more reordering of the search tree, and if you do more extensions instead, you do less reordering.
Engin wrote:
The second thing is that Glaurung applied move-count pruning only up to depth 7, but Stockfish prunes up to depth 16?

Isn't that too much of a risk?
I think this must also depend to some degree on the number of iterations you typically reach, but it indeed carries some risk, IMO. Tord used to say some time ago that he had tested larger numbers, but even on small devices like the iPhone, seven seemed to be the right number for Glaurung, for the extended quiescence search or whatever you would call it. But in Rainbow Serpent I actually use depth sixteen also for value-based futility pruning, not just move-count-based futility pruning. I do try to make it a bit safer with a lot of other measures; some of them undo much of the reduction effect, and some are very little tested.

So in Rainbow Serpent there is:

Code: Select all

  inline Value futility_margin(Depth d, int mn) { return d < 16 * ONE_PLY ? FutilityMarginsMatrix[Max(d, 1)][Min(mn, 63)] : 2 * VALUE_INFINITE; }
  inline int futility_move_count(Depth d) { return d < 16 * ONE_PLY ? FutilityMoveCountArray[d] : 512; }
But, for instance, I used different values in the FutilityMarginsMatrix:

Code: Select all

  // Init futility margins array
  for (d = 1; d < 16; d++) for (mc = 0; mc < 64; mc++)
      FutilityMarginsMatrix[d][mc] = Value (Max((100 + (d * d)) * int(log(double(d * d) / 2) / log(2.0) + 1.001) - ((d+5) * (mc+1) - 45), 45));
Some of the new values are actually smaller than the original values, but after a certain depth they rise a bit faster; that is, if I have not made a mistake in the formula. Rainbow Serpent also has a different futilityBase, which changes futility pruning there as well: I think futilityBase is usually smaller, which means less chance of getting above beta and therefore a bit more chance of pruning the move. (And there is actually a "double pass" futility pruning in place in Rainbow Serpent, with different, usually higher, added futility margins, so even less chance of getting through the futility pruning 8-) )

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
