Limiting Quiescent Search Depth

bob

Re: Limiting Quiescent Search Depth

Post by bob »

Gerd Isenberg wrote:
bob wrote: Yes, but read my explanation. You have one more move before you drop to the q-search. Rather than making it and then calling the q-search, you just ask "will this produce a stand-pat fail high if I try it and advance to the next ply?" and if the answer is "yes" you just save the effort of making/unmaking the move. It really isn't pruning as I would define it. The only "risk" is that the move you don't make will also have an influence on the stand-pat score, so you toss in an error_margin type value to hopefully account for that.

And he is not "skipping the moves": if you look at his code for the other versions, he just reduces the depth by 1 ply; he never avoids searching them, at least not in the copy of his book that I used when I implemented this stuff in Crafty and tested it last year.
Of course futility pruning (depth == 1) only saves make/unmake (and (lazy) eval); nevertheless it is forward pruning by definition, and even more so extended futility pruning and limited razoring. This is Heinz's code:

Code:

int search(int alpha, int beta, int depth, int move, node parent) {
  node current;
  int extend, fmax, fprune, fscore, score, selective;

  /* execute the opponent's move and initialize some local variables */
  make_move(parent, move, &current);
  fprune = selective = 0;
  score = -INFINITY;

  /* determine if and how to extend the search at the current node */
  extend = extensions(depth, move, current);
  depth += extend;

  /* decide about limited razoring at pre-pre-frontier nodes */
  fscore = (mat_balance(current) + razor_margin);
  if (!extend && (depth == PRE_PRE_FRONTIER) && (fscore <= alpha)
      && (opposing_pieces(current) > 3)) depth = PRE_FRONTIER;

  /* decide about extended futility pruning at pre-frontier nodes */
  fscore = (mat_balance(current) + extd_futil_margin);
  if (!extend && (depth == PRE_FRONTIER) && (fscore <= alpha)) {
    fprune = selective = 1;
    score = fmax = fscore;
  }

  /* decide about selective futility pruning at frontier nodes */
  fscore = (mat_balance(current) + futil_margin);
  if (!check(move) && (depth == FRONTIER) && (fscore <= alpha)) {
    fprune = selective = 1;
    score = fmax = fscore;
  }

  /* "selective == 1" and "-INFINITY < score <= alpha" hold if
   * the search will be selective at the current node.
   *
   * continue like normal but prune all futile moves if "fprune == 1"
   */
  ...
  for (move = first(current); move != 0; move = next(current, move))
    if (!fprune || check(move) /* recursive search call */
                || (fmax + mat_gain(move) > alpha)) {...}
  ...
}

Figure 1: Search with Extended Futility Pruning and Limited Razoring.
I understand his code, it is in Crafty as well. My point was that this is less a pruning idea since we are not really pruning anything, we are just short-cutting the search code by avoiding the make/unmake/recursive-search-call. Very much like lazy eval avoids executing code that we notice is not going to help us at all.

If you consider reductions to be pruning, then we are still in agreement since I believe this is more correctly called a reduction approach since that is the effective result. Typically, in AI, "pruning" means to take a branch and declare "we will not search this branch at all..." which is not exactly what a reduction does, although in effect it does since reduced depth eliminates some branches automatically.
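
To make the "short-cut" concrete, here is a minimal sketch of the idea described above (with hypothetical helper names, not Crafty's actual code): one ply above the q-search, the parent pre-computes the stand-pat estimate the child would return, and skips the make/unmake when it cannot reach alpha.

Code:

/* At depth == 1, instead of making the move and letting the child
 * immediately stand pat below alpha, estimate the child's stand-pat
 * score here and skip the make/unmake/recursive call entirely.
 * error_margin covers positional effects the material terms miss. */
int estimate = material_balance + move_gain(move) + error_margin;
if (depth == 1 && !gives_check(move) && estimate <= alpha)
  continue; /* child would stand pat at or below alpha anyway */
make(move);
value = -Quiesce(-beta, -alpha, ...);
unmake(move);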
Gerd Isenberg

Re: Limiting Quiescent Search Depth

Post by Gerd Isenberg »

bob wrote: I understand his code, it is in Crafty as well. My point was that this is less a pruning idea since we are not really pruning anything, we are just short-cutting the search code by avoiding the make/unmake/recursive-search-call. Very much like lazy eval avoids executing code that we notice is not going to help us at all.

If you consider reductions to be pruning, then we are still in agreement since I believe this is more correctly called a reduction approach since that is the effective result. Typically, in AI, "pruning" means to take a branch and declare "we will not search this branch at all..." which is not exactly what a reduction does, although in effect it does since reduced depth eliminates some branches automatically.
I agree that near the tips the difference between reduction and pruning vanishes. Futility pruning (assuming a material balance far below alpha) at depth 1 behaves like qsearch, so apart from the different safety margins it is like a reduced search.

But again, in my understanding of terms, if we use a condition at depth > 0 to skip moves, we prune rather than reduce - at least syntactically ;-)

Code:

// Futility Pruning
int fmax = matBalance + margin[depth];
bool prune = fmax <= alpha;
for (all moves) {
  if ( prune && fmax + gain(move) <= alpha )
     continue; // skip move
  make(move);
  val = -search(depth - 1, ...);
  unmake(move);
  ...
}

Code:

// Reduction
for (all moves) {
  R = ...
  make(move);
  val = -search(depth - 1 - R, ...);
  unmake(move);
  ...
}
A related issue is your proposed razoring:

Code:

/*
************************************************************
*                                                          *
* now we try a quick Razoring test. If we are within 3     *
* plies of a tip, and the current eval is 3 pawns (or      *
* more) below beta, then we just drop into a q-search      *
* to try to get a quick cutoff without searching more in   *
* a position where we are way down in material.            *
*                                                          *
************************************************************
*/
if (razoring_allowed && depth <= razor_depth) {
  if (alpha == beta - 1) { // null window ?
    if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) { // likely a fail-low node ?
      value = QuiesceChecks(tree, alpha, beta, wtm, ply);
      if (value < beta)
        return value; // fail soft
    }
  }
}
Strelka is quite similar in the scout part of PVS. The depth == 1 case omits the (new_value < beta) condition: if the qsearch fails high, an apparently winning capture was found, and there is no hope that a quiet move would do better at a frontier node.

Code:

value = eval + 125;
if (value < beta) {
  if (depth == 1) {
    new_value = qsearch(...);
    return max(new_value, value);
  }
  value += 175;
  if (value < beta && depth <= 3) {
    new_value = qsearch(...);
    if (new_value < beta)
      return max(new_value, value);
  }
}
It seems to me these razoring approaches make futility pruning obsolete, at least in the Strelka implementation. What do you do when you fail high from the razoring qsearch at depth == 1? Continue the depth == 1 search with futility pruning?
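
One possible arrangement of that combination, purely as a sketch (this is not taken from Strelka or Crafty):

Code:

/* frontier node (depth == 1): razor first, and if the razoring
 * qsearch fails high, fall through to the normal depth == 1 search
 * with per-move futility pruning still enabled */
if (depth == 1 && eval + razor_margin < beta) {
  value = qsearch(alpha, beta, ...);
  if (value < beta)
    return value; // razoring succeeded: fail low
  // fail high: continue the depth == 1 search with futility pruning
}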
bob

Re: Limiting Quiescent Search Depth

Post by bob »

Gerd Isenberg wrote:
bob wrote: I understand his code, it is in Crafty as well. My point was that this is less a pruning idea since we are not really pruning anything, we are just short-cutting the search code by avoiding the make/unmake/recursive-search-call. Very much like lazy eval avoids executing code that we notice is not going to help us at all.

If you consider reductions to be pruning, then we are still in agreement since I believe this is more correctly called a reduction approach since that is the effective result. Typically, in AI, "pruning" means to take a branch and declare "we will not search this branch at all..." which is not exactly what a reduction does, although in effect it does since reduced depth eliminates some branches automatically.
I agree that near the tips the difference between reduction and pruning vanishes. Futility pruning (assuming a material balance far below alpha) at depth 1 behaves like qsearch, so apart from the different safety margins it is like a reduced search.

But again, in my understanding of terms, if we use a condition at depth > 0 to skip moves, we prune rather than reduce - at least syntactically ;-)

Code:

// Futility Pruning
int fmax = matBalance + margin[depth];
bool prune = fmax <= alpha;
for (all moves) {
  if ( prune && fmax + gain(move) <= alpha )
     continue; // skip move
  make(move);
  val = -search(depth - 1, ...);
  unmake(move);
  ...
}

Code:

// Reduction
for (all moves) {
  R = ...
  make(move);
  val = -search(depth - 1 - R, ...);
  unmake(move);
  ...
}
A related issue is your proposed razoring:

Code:

/*
************************************************************
*                                                          *
* now we try a quick Razoring test. If we are within 3     *
* plies of a tip, and the current eval is 3 pawns (or      *
* more) below beta, then we just drop into a q-search      *
* to try to get a quick cutoff without searching more in   *
* a position where we are way down in material.            *
*                                                          *
************************************************************
*/
if (razoring_allowed && depth <= razor_depth) {
  if (alpha == beta - 1) { // null window ?
    if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) { // likely a fail-low node ?
      value = QuiesceChecks(tree, alpha, beta, wtm, ply);
      if (value < beta)
        return value; // fail soft
    }
  }
}
Strelka is quite similar in the scout part of PVS. The depth == 1 case omits the (new_value < beta) condition: if the qsearch fails high, an apparently winning capture was found, and there is no hope that a quiet move would do better at a frontier node.

Code:

value = eval + 125;
if (value < beta) {
  if (depth == 1) {
    new_value = qsearch(...);
    return max(new_value, value);
  }
  value += 175;
  if (value < beta && depth <= 3) {
    new_value = qsearch(...);
    if (new_value < beta)
      return max(new_value, value);
  }
}
It seems to me these razoring approaches make futility pruning obsolete, at least in the Strelka implementation. What do you do when you fail high from the razoring qsearch at depth == 1? Continue the depth == 1 search with futility pruning?
You are right, and I need to test this. I didn't implement futility pruning in Crafty; the way it currently works, we just drop into q-search. I added the extended futility code earlier this year. I'm about to test the correct version of futility pruning to see if it is better or worse, since the current version is worth very little, although it is also very safe...

Thanks for pointing this out. I find Heinz's code _very_ difficult to read because of the major difference in how he wrote his search.

In Crafty, I call search, which iterates over the move list and calls search recursively for each possible move. Heinz (in DarkThought) calls search and passes it a move to make. Search then makes this move, and then generates moves and passes them to search one at a time. This turns the definition of "frontier" and such into something different, since his depth=1 is my depth=2 because of this difference. Had me pulling my hair out when I first worked on this, because if I use his "depth" trigger, nothing happens since at depth=1 I am going to q-search the next ply anyway, so going to it "early" is the same as going to it "normally". :)
Gerd Isenberg

Re: Limiting Quiescent Search Depth

Post by Gerd Isenberg »

bob wrote:I find Heinz's code _very_ difficult to read because of the major difference in how he wrote his search.

In Crafty, I call search, which iterates over the move list and calls search recursively for each possible move. Heinz (in DarkThought) calls search and passes it a move to make. Search then makes this move, and then generates moves and passes them to search one at a time. This turns the definition of "frontier" and such into something different, since his depth=1 is my depth=2 because of this difference. Had me pulling my hair out when I first worked on this, because if I use his "depth" trigger, nothing happens since at depth=1 I am going to q-search the next ply anyway, so going to it "early" is the same as going to it "normally". :)
I also found Heinz's code hard to read, especially the part where "limited razoring" implies extended futility pruning later:

Code:

// limited razoring
if ( depth==3 && ... )
  depth = 2; // limited razoring true also implies extended futility pruning

// extended futility pruning
if ( depth==2 && ... )
I don't like passing moves to search either, because it makes the search less versatile with respect to control flow and multiple exits, and it makes/unmakes moves twice in case of PVS re-searches. But I don't see any depth confusion related to this - it should not matter whether you make/unmake before or after the depth decrement.

Code:

int search(int depth, ...) {
  for (all moves) {
    make(move);
    val = -search(depth - 1, ...);
    unmake(move);
    ...
  }
}

Code:

int search(TMove parentmove, int depth, ...) {
  make(parentmove);
  for (all moves) {
    val = -search(move, depth - 1, ...);
    ...
  }
  unmake(parentmove);
  ...
}
Maybe it is a matter of what one got accustomed to first. I found Heinz's definitions of horizon nodes (depth == 0), frontier nodes (depth == 1), pre-frontier nodes (depth == 2) and pre-pre-frontier nodes (depth == 3) quite instructive ;-)
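
For reference, those layers map onto the depth counter like this (a trivial restatement; FRONTIER, PRE_FRONTIER and PRE_PRE_FRONTIER are the constants used in Heinz's Figure 1, HORIZON is added here for symmetry):

Code:

/* Heinz's names for the last few plies before the quiescence search */
#define HORIZON          0  /* horizon nodes: drop into qsearch        */
#define FRONTIER         1  /* frontier nodes: futility pruning        */
#define PRE_FRONTIER     2  /* pre-frontier: extended futility pruning */
#define PRE_PRE_FRONTIER 3  /* pre-pre-frontier: limited razoring      */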
bob

Re: Limiting Quiescent Search Depth

Post by bob »

Gerd Isenberg wrote:
bob wrote:I find Heinz's code _very_ difficult to read because of the major difference in how he wrote his search.

In Crafty, I call search, which iterates over the move list and calls search recursively for each possible move. Heinz (in DarkThought) calls search and passes it a move to make. Search then makes this move, and then generates moves and passes them to search one at a time. This turns the definition of "frontier" and such into something different, since his depth=1 is my depth=2 because of this difference. Had me pulling my hair out when I first worked on this, because if I use his "depth" trigger, nothing happens since at depth=1 I am going to q-search the next ply anyway, so going to it "early" is the same as going to it "normally". :)
I also found Heinz's code hard to read, especially the part where "limited razoring" implies extended futility pruning later:

Code:

// limited razoring
if ( depth==3 && ... )
  depth = 2; // limited razoring true also implies extended futility pruning

// extended futility pruning
if ( depth==2 && ... )
I don't like passing moves to search either, because it makes the search less versatile with respect to control flow and multiple exits, and it makes/unmakes moves twice in case of PVS re-searches. But I don't see any depth confusion related to this - it should not matter whether you make/unmake before or after the depth decrement.

Code:

int search(int depth, ...) {
  for (all moves) {
    make(move);
    val = -search(depth - 1, ...);
    unmake(move);
    ...
  }
}

Code:

int search(TMove parentmove, int depth, ...) {
  make(parentmove);
  for (all moves) {
    val = -search(move, depth - 1, ...);
    ...
  }
  unmake(parentmove);
  ...
}
Maybe it is a matter of what one got accustomed to first. I found Heinz's definitions of horizon nodes (depth == 0), frontier nodes (depth == 1), pre-frontier nodes (depth == 2) and pre-pre-frontier nodes (depth == 3) quite instructive ;-)
The confusion for me is that doing a reduction with depth=1 is meaningless in Crafty, because the next ply is going to be a q-search ply whether I reduce or not. It took a while for his approach to "sink in" as I have never seen a search written like that in the past. Most have used Ken Thompson's original recursive negamax formulation because it is so clean.
bob

Re: Limiting Quiescent Search Depth

Post by bob »

BTW, making this change has not altered the program's Elo at all after 10,000 games. Test is still in progress, but the Elo of the new version is a dead match for the old version so far... (old version didn't prune, just dropped to q-search one ply early.)
edwardyu

Re: Limiting Quiescent Search Depth

Post by edwardyu »

Hi Dr Hyatt,

You said Crafty only generates checks in the first ply of QSearch. What if the check evasion has only a single reply? Do you then extend the QS by generating checks again in order to find mate combinations?
hgm

Re: Limiting Quiescent Search Depth

Post by hgm »

If I correctly understood Bob's earlier posts on this, Crafty always extends checks in such a way that the evasion is searched full width.

I guess you must do this: it would be pointless to search non-capture checks if you would allow the opponent to stand pat while in check. (This would give him a guaranteed fail high, because you are only searching the checks in the first place because standing pat in that node failed low.) And forbidding stand pat without allowing non-capture evasions will produce a false fail-low result in the overwhelming majority of cases (where the check was not dangerous).
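
In sketch form (hypothetical helper names, not Crafty's code), the stand-pat rule described above looks like this:

Code:

int Quiesce(int alpha, int beta) {
  int in_check = InCheck();
  if (!in_check) {
    int stand_pat = Evaluate();
    if (stand_pat >= beta) return stand_pat; // stand pat allowed
    if (stand_pat > alpha) alpha = stand_pat;
  }
  /* in check: no stand pat, and ALL legal evasions are searched,
   * not just captures - otherwise the many harmless checks would
   * produce false fail lows here */
  moves = in_check ? GenerateEvasions() : GenerateCaptures();
  ...
}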
bob

Re: Limiting Quiescent Search Depth

Post by bob »

edwardyu wrote:Hi Dr Hyatt,

You said Crafty only generates checks in the first ply of QSearch. What if the check evasion has only a single reply? Do you then extend the QS by generating checks again in order to find mate combinations?
No. I tested that, but it was worse. I originally allowed N consecutive checks in the q-search, and varied N from 1 to 4. I then added other restrictions, such as trying the second checking ply only if the opponent has one legal way out of check. But every case was weaker. The only gain from checks in the q-search was that it let me do null-move R=3 everywhere, whereas before I was doing R=3 near the root and R=2 for the last 6-7 plies... The gain was 4-5 Elo max, so it is not a world-beater.
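
For illustration, the "N consecutive checks" restriction can be carried as a budget passed down the q-search (a sketch with made-up names, not Crafty's actual code):

Code:

/* checks_left limits how many checking plies the q-search may add */
int Quiesce(int alpha, int beta, int checks_left) {
  ...
  /* if no capture produced a cutoff, try checking moves, but only
   * while the budget lasts */
  if (checks_left > 0) {
    for (each checking move) {
      make(move);
      value = -Quiesce(-beta, -alpha, checks_left - 1);
      unmake(move);
      ...
    }
  }
  ...
}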
bob

Re: Limiting Quiescent Search Depth

Post by bob »

hgm wrote:If I correctly understood Bob's earlier posts on this, Crafty always extends checks in such a way that the evasion is searched full width.

I guess you must do this: it would be pointless to search non-capture checks if you would allow the opponent to stand pat while in check. (This would give him a guaranteed fail high, because you are only searching the checks in the first place because standing pat in that node failed low.) And forbidding stand pat without allowing non-capture evasions will produce a false fail-low result in the overwhelming majority of cases (where the check was not dangerous).
Let's make sure we are talking apples to apples.

In my normal search, I extend when I give a check, which guarantees I will never hit the first ply of q-search while in check, because the move at the previous ply would have been extended.

Once I get to the q-search, I know I am not in check, and I do a normal q-search first, including the usual stand-pat option. If no captures produce a fail high, I then generate only checking moves and try those. While in the first ply of q-search, any checking move will generate an "all legal replies" response at the next ply to verify that the check is not mate. This includes checks in the first group of captures as well as checks that are generated explicitly.

At ply 2 I will try all legal moves to escape check, assuming I am in check; otherwise I do normal captures. Once I go any deeper, it is captures only: no further checks are generated, and if a capture happens to give check, I do not do a full check-evasion search in response...
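
A rough outline of that ply structure (a paraphrased sketch with a qply counter; the names are illustrative, not Crafty's source):

Code:

/* qply = ply number within the quiescence search */
int Quiesce(int alpha, int beta, int qply) {
  if (InCheck() && qply == 2) {
    /* the check came from q-ply 1: no stand pat, and every legal
     * evasion is tried, to verify the check is not mate */
    try all legal moves;
  } else {
    stand pat (return eval if eval >= beta, else raise alpha);
    try captures;
    if (qply == 1 && nothing failed high)
      try non-capture checking moves; /* checks only at q-ply 1 */
    /* deeper plies: captures only, and even a checking capture
     * gets no special evasion search in reply */
  }
  ...
}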