Where'd you get that from? Delta pruning and futility pruning are identical, except delta pruning is in quiescence, and futility is in the main search. Futility pruning really is pruning too: http://people.csail.mit.edu/heinz/dt/node23.html
bob wrote: futility pruning is a different animal and it is not really a "pruning" algorithm so much as it is a "reduction" algorithm. The two things are completely unrelated...
Limiting Quiescent Search Depth
Moderators: hgm, Rebel, chrisw
-
- Posts: 1922
- Joined: Thu Mar 09, 2006 12:51 am
- Location: Earth
Re: Limiting Quiescent Search Depth
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Limiting Quiescent Search Depth
Futility pruning, as defined by Heinz and others, is not "pruning". It is a reduction. You just lop off the last ply and drop right into the q-search, but by checking the current material score + a window, you know that doing this will cause a stand-pat cutoff, so you just quit. This is only done at depth=1 nodes, where the next level will be a q-search node. Extended futility pruning backs this up one additional ply and does a similar thing at depth=2, which is 2 plies away from the q-search: we reduce by one ply if the conditions are satisfied and drop right into the q-search.
I don't consider either of these to be real "pruning" algorithms, and I particularly dislike the term "delta pruning", which someone coined after looking at Crafty's source, because I used a variable "delta" to indicate what kind of material gain is necessary to make a move worth searching.
Futility pruning is worth nothing in today's programs thanks to LMR and null-move search. Null-move drops most positions right into the q-search before you even get a chance to try futility, and then LMR also reduces things like mad.
I'd personally consider both ideas to be optimizations rather than pruning algorithms. No point in searching a move that is going to produce a stand-pat fail high, since that will be a fail-low one ply earlier. And that is what both do, except futility is only applied at depth=1 nodes while the "delta" idea is applied at all depth=0 nodes.
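The stand-pat argument above boils down to a single comparison. Here is a minimal sketch of the "delta" test as applied to a capture in quiescence; this is illustrative code, not Crafty's, and the margin value and all names are assumptions:

```c
#define DELTA_MARGIN 200  /* assumed safety margin, in centipawns */

/* Skip a capture in quiescence when even winning the captured piece
 * outright, plus a safety margin, cannot lift the score above alpha:
 * the child node would just stand pat, which is a fail-low here. */
int delta_prune(int stand_pat, int alpha, int captured_value)
{
    return stand_pat + captured_value + DELTA_MARGIN <= alpha;
}
```

The futility test at depth=1 is the same comparison, just made one ply earlier in the main search.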
-
- Posts: 10280
- Joined: Thu Mar 09, 2006 12:37 am
- Location: Tel-Aviv Israel
Re: Limiting Quiescent Search Depth
The purpose of the qsearch is to improve the playing strength of the engine and not to test all possible capture sequences.
JVMerlino wrote: Here's your Newbie question of the week(tm):
The Chessprogramming Wiki site states that in a quiescent search you should "limit capture search to some depth". But doesn't doing so defeat the purpose of the q-search, which is to test all possible capture sequences out to their conclusion?
While it's true that choosing a depth limit of, for example, 10 plies probably only fails in a very small percentage of cases, it still seems like a potentially dangerous idea to me.
Thoughts?
Many thanks in advance,
jm
Testing all possible capture sequences can be dangerous in the worst case because, in theory, you may need millions of nodes to do it.
Practically, cases where you need millions of nodes for a single qsearch almost never happen, but I see no reason to think that limiting the qsearch is more dangerous than not limiting it.
I think that the practical difference between limiting the qsearch to 7 plies (returning a score based on SEE beyond that) and not limiting it is very small, and I guess that if Bob tests it he will find a difference of less than 10 Elo.
Uri
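A depth-limited quiescence search of the kind discussed here can be sketched as follows. This is a deliberately toy model, not any engine's code: "captures" are a single forced exchange sequence stored in a fixed array, and all names and values are illustrative:

```c
#define QS_MAX_PLY 7   /* hard ply limit on the quiescence search */

/* chain[i]: value of the piece captured at qsearch ply i, from the
 * side to move's point of view; at most one capture per ply. */
static const int chain[] = {100, 300, 100, 500};
static const int chain_len = 4;

int quiesce(int ply, int alpha, int beta, int stand_pat)
{
    if (ply >= QS_MAX_PLY)          /* depth limit: just stand pat */
        return stand_pat;
    if (stand_pat >= beta)          /* stand-pat cutoff */
        return stand_pat;
    if (stand_pat > alpha)
        alpha = stand_pat;
    if (ply < chain_len) {          /* try the one available capture */
        int score = -quiesce(ply + 1, -beta, -alpha,
                             -(stand_pat + chain[ply]));
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
```

In this toy the side to move correctly declines the losing exchange and stands pat; replacing the recursion past QS_MAX_PLY with a SEE-based score, as suggested above, would be the other variant being compared.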
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Limiting Quiescent Search Depth
That would actually be an interesting test to run, and I might give this a try before long to see what it does to NPS vs. actual playing strength.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Limiting Quiescent Search Depth
I am first testing the worst case, which in Crafty is: QuiesceChecks() is called normally, then QuiesceEvasions() is called for any checks searched, then Quiesce() simply returns static eval + SEE for capturing the last piece the opponent moved. So far it is about 50 Elo worse, but the test has a couple of hours to go to reach 32K games.
-
- Posts: 5106
- Joined: Tue Apr 29, 2008 4:27 pm
Re: Limiting Quiescent Search Depth
I have not found that to be the case. Does Crafty not do futility pruning?
Actually, this is something I tested recently, and I was astonished by how much speed it gave me. I also lose a lot of quality, but I carefully measured the tradeoff and it's WELL worth doing.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Limiting Quiescent Search Depth
Yes it does. I found it to be worth roughly _two_ Elo over 80,000 games. The entire AEL idea from Heinz is worth about 12 Elo total. This was measured in a program that already does null-move R=3 + LMR.
I didn't measure speed. I just played almost 100,000 games with and without to see the effect... It was really minimal.
-
- Posts: 2250
- Joined: Wed Mar 08, 2006 8:47 pm
- Location: Hattingen, Germany
Re: Limiting Quiescent Search Depth
Nope, see Heinz's pseudo code in Scientific Articles in Books and Journals, 8. Extended futility pruning (postscript), page 10.
He skips moves with futility pruning at depth == 1 nodes, which he calls frontier nodes, and with extended futility pruning at depth == 2 (pre-frontier nodes). With "limited razoring", he decrements the depth by one at depth == 3 nodes (pre-pre-frontier nodes); after that, the extended futility condition is true and the appropriate moves are skipped as well.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Limiting Quiescent Search Depth
Yes, but read my explanation. You have one more move before you drop to the q-search. Rather than making it and then calling the q-search, you just ask "will this produce a stand-pat fail high if I try it and advance to the next ply?", and if the answer is "yes" you save the effort of making/unmaking the move. It really isn't pruning as I would define it. The only "risk" is that the move you don't make would also have influenced the stand-pat score, so you toss in an error_margin type value to hopefully account for that.
And he is not "skipping the moves": if you look at his code for the other versions, he just reduces the depth by 1 ply; he never avoids searching them, at least not in the copy of his book that I used when implementing this stuff in Crafty when I tested it last year.
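The frontier-node test described above can be sketched like this; the margin value and all names are illustrative assumptions, not Crafty's actual code:

```c
#define ERROR_MARGIN 125  /* assumed allowance for positional terms */

/* At a depth == 1 (frontier) node, before making a move, ask whether
 * material after the move plus an error margin can still reach alpha.
 * If not, the q-search child would just stand pat, so searching the
 * move is futile and the make/unmake can be skipped. Never prune when
 * the side to move is in check (engines typically also exempt moves
 * that give check). */
int futile_at_frontier(int material, int move_gain, int alpha, int in_check)
{
    if (in_check)
        return 0;
    return material + move_gain + ERROR_MARGIN <= alpha;
}
```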
-
- Posts: 2250
- Joined: Wed Mar 08, 2006 8:47 pm
- Location: Hattingen, Germany
Re: Limiting Quiescent Search Depth
Of course futility pruning (depth == 1) only saves the make/unmake (and the (lazy) eval); nevertheless it is forward pruning per definition, and even more so extended futility pruning and limited razoring. This is Heinz's code:
Code:
int search(int alpha, int beta, int depth, int move, node parent) {
   node current;
   int  extend, fmax, fprune, fscore, score, selective;
   /* execute the opponent's move and initialize some local variables */
   make_move(parent, move, &current);
   fprune = selective = 0;
   score  = -INFINITY;
   /* determine if and how to extend the search at the current node */
   extend = extensions(depth, move, current);
   depth += extend;
   /* decide about limited razoring at pre-pre-frontier nodes */
   fscore = (mat_balance(current) + razor_margin);
   if (!extend && (depth == PRE_PRE_FRONTIER) && (fscore <= alpha)
       && (opposing_pieces(current) > 3)) depth = PRE_FRONTIER;
   /* decide about extended futility pruning at pre-frontier nodes */
   fscore = (mat_balance(current) + extd_futil_margin);
   if (!extend && (depth == PRE_FRONTIER) && (fscore <= alpha)) {
      fprune = selective = 1;
      score  = fmax = fscore;
   }
   /* decide about selective futility pruning at frontier nodes */
   fscore = (mat_balance(current) + futil_margin);
   if (!check(move) && (depth == FRONTIER) && (fscore <= alpha)) {
      fprune = selective = 1;
      score  = fmax = fscore;
   }
   /* "selective == 1" and "-INFINITY < score <= alpha" hold if
    * the search will be selective at the current node.
    *
    * continue like normal but prune all futile moves if "fprune == 1"
    */
   ...
   for (move = first(current); move != 0; move = next(current, move))
      if (!fprune || check(move)                /* recursive search call */
          || (fmax + mat_gain(move) > alpha)) { ... }
   ...
}
Figure 1: Search with Extended Futility Pruning and Limited Razoring.