Razoring...

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Razoring...

Post by bob »

I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.

It made a difference. But not a very big one...

Our cluster is currently having problems (once again) with the IBRIX file system (I would not recommend IBRIX to _anyone_, regardless of the glowing self-promotional comments on their web page; the thing we have has never been reliable). In any case, the razoring improvement was _very_ small, just a few Elo. I will post the actual Elo results when I can get back to my files on the cluster. So to date, checks in q-search were worth a couple of Elo, razoring a very few more. Has anyone actually found a big improvement (actually measured) with either of these two ideas? I have been seeing lots of "glowing comments" over the past couple of years, but experimental data is not supporting the comments...

We have been using futility pruning for several years already, so this was added on top of that. I am a bit suspicious that LMR probably provides most of the same "gain", and since it is more general (applied well inside the tree, rather than just near the frontier), it might make the futility/razoring stuff less effective than it would be without LMR. I am going to continue testing to see exactly what is going on...
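For reference, the frontier-node futility pruning mentioned above can be sketched as below. This is a minimal illustration, not Crafty's actual code; the function name, the 300-centipawn margin, and the flag parameters are all assumptions for the example.

```c
/* Minimal sketch of frontier-node futility pruning: at depth 1, a quiet,
   non-checking move is skipped without searching it when the static eval
   plus a margin still cannot reach alpha.  Margin and names are
   illustrative, not Crafty's actual values. */
#define FUTILITY_MARGIN 300 /* roughly three pawns, in centipawns */

int futile(int static_eval, int alpha, int depth,
           int is_capture, int gives_check) {
  return depth == 1 && !is_capture && !gives_check &&
         static_eval + FUTILITY_MARGIN <= alpha;
}
```

A move flagged futile is simply not searched; captures and checks are exempt because they can recover large material swings.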
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring... (data)

Post by bob »

The BayesElo data appears below. Here's what each thing means...

1. The Crafty-22.2R14-n versions were experiments with adjusting the lazy-evaluation cutoff window. The -n is the cutoff limit (the old default was 300, but if you look at these results, the best value is 125). The numbers form a nice curve, with the peak at 125; both smaller and larger values get worse. So far, so good.

2. Crafty-22.2R15-125 is effectively the same as Crafty-22.2R14. I made the change and re-ran the test to confirm the result.

3. Crafty-22.2R16 is the same as R15-125, except that razoring was turned off. That dropped the rating by 2-5 Elo, depending on which of the two tests above you compare against.

Code:

   1 Crafty-22.2R15-125    -7    4    4 31117   44%    38   21%     0  0  0    87 93 96 96 98 99 99 99 99100
   2 Crafty-22.2R14-125   -10    4    4 31113   44%    38   21%     0  0  0 12    62 72 73 83 99 99 99 99100
   3 Crafty-22.2R14-100   -11    5    5 31112   43%    38   21%     0  0  0  6 37    60 61 74 99 99 99 99100
   4 Crafty-22.2R14-150   -12    4    5 31115   43%    38   21%     0  0  0  3 27 39    51 65 98 99 99 99100
   5 Crafty-22.2R16-125   -12    4    4 31110   43%    38   20%     0  0  0  3 26 38 48    64 97 99 99 99100
   6 Crafty-22.2R14-75    -13    4    4 31108   43%    38   21%     0  0  0  1 16 25 34 35    95 99 99 99100
   7 Crafty-22.2R14-200   -17    5    5 31114   43%    38   21%     0  0  0  0  0  0  1  2  4    90 99 99100
   8 Crafty-22.2R14-250   -20    4    5 31113   42%    38   21%     0  0  0  0  0  0  0  0  0  9    90 99100
   9 Crafty-22.2R14-50    -23    4    4 31117   42%    38   21%     0  0  0  0  0  0  0  0  0  0  9    92100
  10 Crafty-22.2R14-300   -27    4    4 31110   41%    38   20%     0  0  0  0  0  0  0  0  0  0  0  7   100
These represent matches of 31K games each against Glaurung 1 and 2, Fruit 2, and Toga 2...

If the lazy-eval cutoff is set too low, it makes errors, obviously. As it gets larger, the lazy-exit errors drop, but so does overall speed. Going too far eliminates the errors but increases evaluation cost to the point that it hurts the search.
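The lazy-evaluation tradeoff described above can be sketched roughly as follows. This is an illustrative reconstruction, not Crafty's actual implementation: `lazy_evaluate`, the stand-in `expensive_positional_terms`, and the use of a single symmetric margin are all assumptions; the 125 value is taken from the test results above.

```c
/* Sketch of a lazy-evaluation cutoff: if the cheap material-only score
   is already far enough outside the (alpha, beta) window, skip the
   expensive positional terms.  A small margin risks exit errors; a huge
   one degenerates to always doing the full (slow) evaluation. */
#define LAZY_MARGIN 125 /* centipawns; best value in the runs above */

static int expensive_positional_terms(void) {
  return 40; /* stand-in for the costly part of the eval */
}

int lazy_evaluate(int material, int alpha, int beta) {
  if (material + LAZY_MARGIN < alpha) /* can't climb back to alpha */
    return material;
  if (material - LAZY_MARGIN > beta)  /* already safely above beta */
    return material;
  return material + expensive_positional_terms();
}
```

The errors come from positions where the positional terms exceed the margin, so the cheap score is wrongly trusted.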
CRoberson
Posts: 2055
Joined: Mon Mar 13, 2006 2:31 am
Location: North Carolina, USA

Re: Razoring...

Post by CRoberson »

I remember from several years ago that somebody stated that checks in the
qsearch are a big help if you don't have much of a king-safety eval.
Otherwise, they are worth little.
jarkkop
Posts: 198
Joined: Thu Mar 09, 2006 2:44 am
Location: Helsinki, Finland

Re: Razoring... (data)

Post by jarkkop »

Have you tried using SEE in Crafty, as in Fruit, to give a bonus when the square in front of a passed pawn has SEE >= 0?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring... (data)

Post by bob »

jarkkop wrote:Have you tried using SEE in Crafty, as in Fruit, to give a bonus when the square in front of a passed pawn has SEE >= 0?
That is on the to-do list; it is a way of measuring the mobility of a passed pawn...
Gerd Isenberg
Posts: 2250
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Razoring...

Post by Gerd Isenberg »

bob wrote:I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.
Bob, can you please elaborate on how this razoring works? My guess is at depth == 2 (pre-frontier nodes)? Is it based on the original idea from Birmingham and Kent? A little pseudo code would be nice.

Thanks,
Gerd
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring...

Post by bob »

Gerd Isenberg wrote:
bob wrote:I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.
Bob, can you please elaborate on how this razoring works? My guess is at depth == 2 (pre-frontier nodes)? Is it based on the original idea from Birmingham and Kent? A little pseudo code would be nice.

Thanks,
Gerd
Simple idea, really:

Code:

/*
 ************************************************************
 *                                                          *
 *   now we try a quick Razoring test.  If we are within 3  *
 *   plies of a tip, and the current eval is 3 pawns (or    *
 *   more) below beta, then we just drop into a q-search    *
 *   to try to get a quick cutoff without searching more in *
 *   a position where we are way down in material.          *
 *                                                          *
 ************************************************************
 */
  if (razoring_allowed && depth <= razor_depth) {
    if (alpha == beta - 1) {
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
        value = QuiesceChecks(tree, alpha, beta, wtm, ply);
        if (value < beta)
          return (value);
      }
    }
  }
In Crafty, the above is done after the null move has been tried and failed low, or if it wasn't tried at all. I am still playing with "razoring_allowed". The original discussion I took notes from suggested the usual conditions, "no search extensions and such" (this is in Heinz's "Scalable Search ..."). I have been experimenting with various reasons not to do this. The current razor_depth is 3 plies, which is equivalent to his "pre-pre-frontier node" concept. I was experimenting with the margin and don't have any accurate results yet, thanks to our IBRIX filesystem once again crashing during the night. I am using 3 pawns, but have seen good results with 2 pawns as well. More once I can get the calibration tests done on the cluster.

The idea is that if you are well behind, with just a couple of plies left, there is not much that will get that material back besides a capture, so just collapse into a q-search and see if you can get the material back with a shallow search. If not, bail out, otherwise continue searching normally.

This was a quick-and-dirty approach. I am going to rewrite search so that the razoring/futility-pruning (and maybe even extended futility pruning) is all done at one place inside the main make/search/unmake loop where I have more information available about a particular move, which will probably make this work better...
Gerd Isenberg
Posts: 2250
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Razoring...

Post by Gerd Isenberg »

bob wrote: Simple idea, really:

Code:

/*
 ************************************************************
 *                                                          *
 *   now we try a quick Razoring test.  If we are within 3  *
 *   plies of a tip, and the current eval is 3 pawns (or    *
 *   more) below beta, then we just drop into a q-search    *
 *   to try to get a quick cutoff without searching more in *
 *   a position where we are way down in material.          *
 *                                                          *
 ************************************************************
 */
  if (razoring_allowed && depth <= razor_depth) {
    if (alpha == beta - 1) {
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
        value = QuiesceChecks(tree, alpha, beta, wtm, ply);
        if (value < beta)
          return (value);
      }
    }
  }
In Crafty, the above is done after the null move has been tried and failed low, or if it wasn't tried at all. I am still playing with "razoring_allowed". The original discussion I took notes from suggested the usual conditions, "no search extensions and such" (this is in Heinz's "Scalable Search ..."). I have been experimenting with various reasons not to do this. The current razor_depth is 3 plies, which is equivalent to his "pre-pre-frontier node" concept. I was experimenting with the margin and don't have any accurate results yet, thanks to our IBRIX filesystem once again crashing during the night. I am using 3 pawns, but have seen good results with 2 pawns as well. More once I can get the calibration tests done on the cluster.

The idea is that if you are well behind, with just a couple of plies left, there is not much that will get that material back besides a capture, so just collapse into a q-search and see if you can get the material back with a shallow search. If not, bail out, otherwise continue searching normally.

This was a quick-and-dirty approach. I am going to rewrite search so that the razoring/futility-pruning (and maybe even extended futility pruning) is all done at one place inside the main make/search/unmake loop where I have more information available about a particular move, which will probably make this work better...
I see, this is more Heinz's interpretation of razoring, and more a kind of reduction, even if a more aggressive one: three plies from the tips, jumping directly into the qsearch with checks. At depth 3, mymove -> yourmove -> mymove may improve alpha if the first mymove, even if quiet and not a check, introduces some tactical threat like a fork, a double (discovered) attack, or a pin. But that may only be good for tactical test suites rather than for game play. Wasn't that razoring used in Strelka as well?

The "ancient" (1977) Birmingham and Kent approach to razoring was to generate all moves (say, on expected all- or fail-low nodes, as selected by your null-window and eval conditions) and to sort them by the evaluation after making and unmaking each move (which already takes some time and might be considered a reduced search without quiescence).

While later fetching the sorted moves in decreasing order, as long as the (stored) evaluation of each move exceeds alpha, they are tried as usual. Once a move's static evaluation no longer improves alpha, that move and all further moves (sorted below it) are pruned without any further investigation, and alpha is returned.

The idea was used at depth == 2 nodes, based on the null-move observation that the opponent may improve his score after mymove with hismove (from my point of view, moving even further below alpha). There was also a proposal for deep razoring at depth == 4 nodes, with the weaker assumption that hismove -> mymove -> hismove should improve his score as well.
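The sorted-prune loop described above can be sketched roughly as follows. This is a reconstruction for illustration, not the original 1977 code: `ScoredMove`, `razor_node`, and `search_stub` (which stands in for the real recursive search) are all names invented for the example.

```c
#include <stdlib.h>

/* Sketch of Birmingham/Kent razoring: moves carry the static eval
   obtained after make/unmake; they are sorted in decreasing order and
   searched only while that stored eval still exceeds alpha.  The first
   move that fails the test razors off everything sorted below it. */
typedef struct {
  int move;        /* opaque move identifier */
  int static_eval; /* eval after make/unmake, from my point of view */
} ScoredMove;

static int cmp_desc(const void *a, const void *b) {
  return ((const ScoredMove *)b)->static_eval -
         ((const ScoredMove *)a)->static_eval;
}

/* stand-in for the real recursive search: just echoes the static eval */
static int search_stub(const ScoredMove *m, int alpha, int beta) {
  (void)alpha;
  (void)beta;
  return m->static_eval;
}

int razor_node(ScoredMove *moves, int n, int alpha, int beta) {
  qsort(moves, n, sizeof *moves, cmp_desc); /* decreasing static eval */
  for (int i = 0; i < n; i++) {
    if (moves[i].static_eval <= alpha) /* razor: prune this and the rest */
      break;
    int v = search_stub(&moves[i], alpha, beta);
    if (v > alpha) {
      alpha = v;
      if (alpha >= beta) /* fail-high cutoff */
        return beta;
    }
  }
  return alpha;
}
```

Note that, unlike the Heinz/Crafty variant, no quiescence search backs up the pruning decision; the stored static evals alone decide where the move list is cut.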
Gerd Isenberg
Posts: 2250
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Razoring...

Post by Gerd Isenberg »

Gerd Isenberg wrote:Wasn't that Razoring used in Strelka as well?
Yep, see Razoring in cpw.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Razoring...

Post by mjlef »

bob wrote:
Gerd Isenberg wrote:
bob wrote:I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.
Bob, can you please elaborate on how this razoring works? My guess is at depth == 2 (pre-frontier nodes)? Is it based on the original idea from Birmingham and Kent? A little pseudo code would be nice.

Thanks,
Gerd
Simple idea, really:

Code:

/*
 ************************************************************
 *                                                          *
 *   now we try a quick Razoring test.  If we are within 3  *
 *   plies of a tip, and the current eval is 3 pawns (or    *
 *   more) below beta, then we just drop into a q-search    *
 *   to try to get a quick cutoff without searching more in *
 *   a position where we are way down in material.          *
 *                                                          *
 ************************************************************
 */
  if (razoring_allowed && depth <= razor_depth) {
    if (alpha == beta - 1) {
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
        value = QuiesceChecks(tree, alpha, beta, wtm, ply);
        if (value < beta)
          return (value);
      }
    }
  }
In Crafty, the above is done after the null move has been tried and failed low, or if it wasn't tried at all. I am still playing with "razoring_allowed". The original discussion I took notes from suggested the usual conditions, "no search extensions and such" (this is in Heinz's "Scalable Search ..."). I have been experimenting with various reasons not to do this. The current razor_depth is 3 plies, which is equivalent to his "pre-pre-frontier node" concept. I was experimenting with the margin and don't have any accurate results yet, thanks to our IBRIX filesystem once again crashing during the night. I am using 3 pawns, but have seen good results with 2 pawns as well. More once I can get the calibration tests done on the cluster.

The idea is that if you are well behind, with just a couple of plies left, there is not much that will get that material back besides a capture, so just collapse into a q-search and see if you can get the material back with a shallow search. If not, bail out, otherwise continue searching normally.

This was a quick-and-dirty approach. I am going to rewrite search so that the razoring/futility-pruning (and maybe even extended futility pruning) is all done at one place inside the main make/search/unmake loop where I have more information available about a particular move, which will probably make this work better...
Bob,

How did the run for varying the razoring margin turn out?

Mark