Implications of Lazy eval on Don Beal effect in Fail Soft

Discussion of chess software programming and technical issues.


bob

Re: Implications of Lazy eval on Don Beal effect in Fail Soft

Post by bob »

Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:Is it true that lazy eval does not perform well in Fail-Soft? And if so, why doesn't that hold for futility pruning, and isn't that a good reason to switch over to Fail-Hard?
I don't see why fail-soft or fail-hard would matter. Yes, lazy eval might not give you a value quite as far outside the A-B window as a straight eval, but alpha is good enough to cause a cutoff. I don't think fail-soft helps anything at all until you start to use mtd(f).
Fail-Soft gives higher lower bounds and lower upper bounds. Why doesn't that help?
Because anything >= beta or <= alpha causes a cutoff. Doesn't matter how much above or below. If you use PVS as almost everyone does, 99.9% of the tree is searched with a window of (alpha, alpha+1). Whether you get a backed-up value of alpha+1 or alpha+1000 doesn't help a bit. The only benefit I have seen is in mtd(f), where a backed-up value outside the window gives you a better estimate of how far to relax the bound that failed...
Aha. The only place I don't use PVS is in QuiescenceSearch, because I don't have a first best move there.

There are also some hash entries that are used there and cause early cutoffs.
A good first move doesn't mean a thing here. A narrower window will produce a cutoff more quickly than a wider one, regardless of move ordering, assuming the ordering stays the same for either window...
Searching with a null window from the previous ply??? I would assume you use a null window there as well???
I don't understand what that means. A good first move doesn't mean a thing where?

In normal search I use PVS everywhere. Move ordering is important to prevent re-searches. If the current window is a null window, it is still important to try a move that gives a cutoff as soon as possible. So move ordering is also important in a null-window search.

Good move ordering is important everywhere, in both normal search and QuiescenceSearch.
The point was this: whether you do a search with (alpha, alpha+1) or (alpha, alpha+100), the first move won't matter if it is the same in both of those cases. Almost all of the tree is searched with a null window, which means this fail-soft/fail-hard stuff is not going to have any effect. If something is == alpha, it is a fail low; if it is < alpha, it is STILL a fail low. So it doesn't really do anything inside the search, except that when you fail high or fail low, you have a VERY rough estimate of how far that window needs to be relaxed. Commonly used in mtd(f), but it doesn't do anything for normal search.
What about the entries in the transposition table? Don't you get more cutoffs in fail-soft when looking up positions in the transposition table, since they have tighter lower and upper bounds, or is that effect negligible too?
If the tree is searched with (A, A+1), then scores outside the window are no better than scores on the edge of the window. That's the key. EXACT hashes from one iteration to the next are pretty rare since the depth is off by one, of course. And within an iteration, alpha/beta are generally constant.
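
To make the fail-soft/fail-hard distinction concrete, here is a minimal toy negamax over an abstract game tree. It is only a sketch: leaf_value, child_of and the constants are made-up placeholders, not taken from any real engine. The single difference between the two modes is the value returned at the end; the beta-cutoff test is identical, so with a null window (alpha, alpha+1) both modes visit the same nodes and only the number that comes back differs.

Code:

#include <stdio.h>

#define BRANCH 3                 /* moves per node in the toy tree        */
#define INF    1000000

static int leaf_value(int node)  /* deterministic stand-in for an eval    */
{
    return (int)((node * 2654435761u >> 24) % 201) - 100;
}

static int child_of(int node, int move)
{
    return node * BRANCH + move + 1;
}

/* fail_soft != 0: return the best score found, even outside (alpha, beta).
   fail_soft == 0: clamp the returned score to the window (fail-hard).     */
static int negamax(int node, int depth, int alpha, int beta, int fail_soft)
{
    if (depth == 0)
        return leaf_value(node);

    int alpha_orig = alpha;
    int best = -INF;
    for (int move = 0; move < BRANCH; move++) {
        int score = -negamax(child_of(node, move), depth - 1,
                             -beta, -alpha, fail_soft);
        if (score > best) {
            best = score;
            if (best > alpha)
                alpha = best;
        }
        if (alpha >= beta)       /* the cutoff test is the same either way */
            break;
    }
    if (fail_soft)
        return best;                           /* may lie outside the window */
    if (best >= beta)       return beta;       /* fail-hard clamps high...   */
    if (best <= alpha_orig) return alpha_orig; /* ...and low                 */
    return best;
}

int main(void)
{
    int guess = 0;
    int soft = negamax(0, 6, guess, guess + 1, 1);
    int hard = negamax(0, 6, guess, guess + 1, 0);

    printf("null window (%d, %d): fail-soft returns %d, fail-hard returns %d\n",
           guess, guess + 1, soft, hard);
    printf("both %s the bound; only the returned number differs\n",
           soft > guess ? "fail high on" : "fail low on");
    return 0;
}

Which is the point above: inside a PVS tree the parent only ever compares that number against the bound, so alpha+1000 is worth no more than alpha+1.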
Henk

Re: Implications of Lazy eval on Don Beal effect in Fail Soft

Post by Henk »

bob wrote:If the tree is searched with (A, A+1), then scores outside the window are no better than scores on the edge of the window. That's the key. EXACT hashes from one iteration to the next are pretty rare since the depth is off by one, of course. And within an iteration, alpha/beta are generally constant.
If the hash score of the same position at the same depth is not exact, but only a lower or upper bound, then in fail-soft there is still an advantage: a lower upper bound or a higher lower bound can be used to update alpha or beta when the search is started.
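
As a rough sketch of the idea being described here, assume a conventional transposition-table entry that stores a score plus a flag saying whether it is EXACT, a LOWER bound (stored after a fail high) or an UPPER bound (stored after a fail low). The names below (TTEntry, tt_cutoff and so on) are illustrative, not from any particular engine. Fail-soft can store a bound tighter than the window edge the node was searched with, which is what would make the alpha/beta tightening below occasionally pay off.

Code:

#include <stdio.h>

enum { TT_EXACT, TT_LOWER, TT_UPPER };   /* LOWER: real score >= stored
                                            UPPER: real score <= stored    */

typedef struct {
    unsigned long long key;   /* position hash                             */
    int depth;                /* draft the entry was searched to           */
    int score;                /* fail-soft score (or window edge if hard)  */
    int flag;
} TTEntry;

/* Returns 1 and sets *score if the entry alone decides this node;
   otherwise tightens *alpha / *beta in place and returns 0.               */
static int tt_cutoff(const TTEntry *e, int depth,
                     int *alpha, int *beta, int *score)
{
    if (e->depth < depth)
        return 0;                              /* too shallow to trust     */

    if (e->flag == TT_EXACT) {
        *score = e->score;
        return 1;
    }
    if (e->flag == TT_LOWER) {                 /* real score >= e->score   */
        if (e->score >= *beta) { *score = e->score; return 1; }
        if (e->score > *alpha) *alpha = e->score;   /* tighten the window  */
    } else {                                   /* TT_UPPER: <= e->score    */
        if (e->score <= *alpha) { *score = e->score; return 1; }
        if (e->score < *beta) *beta = e->score;     /* tighten the window  */
    }
    return 0;
}

int main(void)
{
    /* A fail high stored fail-soft as a lower bound of +20, probed later
       in a node searched with a wide window.                              */
    TTEntry e = { 0x1234ULL, 8, 20, TT_LOWER };
    int alpha = -50, beta = 50, score;

    if (tt_cutoff(&e, 8, &alpha, &beta, &score))
        printf("immediate hash cutoff, score %d\n", score);
    else
        printf("search this node with the tightened window (%d, %d)\n",
               alpha, beta);
    return 0;
}

Note, though, that if the probing node is using the usual PVS null window (alpha, alpha+1), the tightening lines can never be reached: any lower bound above alpha is already >= beta and cuts off outright, and an upper bound either causes a fail low or is no tighter than beta. Either way nothing gets tightened, which is the objection in the reply that follows.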
bob

Re: Implications of Lazy eval on Don Beal effect in Fail Soft

Post by bob »

Henk wrote:If the hash score of the same position at the same depth is not exact, but only a lower or upper bound, then in fail-soft there is still an advantage: a lower upper bound or a higher lower bound can be used to update alpha or beta when the search is started.
I don't follow. First, there is no "update alpha" in a null-window search, which is what most of the tree looks like in PVS. If you increase alpha, you will instantly fail high, and vice-versa. When you start a search, how can you update anything when the only hash entries you have are from a search that is 1 ply shallower?

All I can say is that I tested it extensively in my code, and it offered exactly nothing. It might well help you; I can't address that. But in the era of PVS null-window searching, about the only help I can visualize is that when you fail high or low, you have at least a hint of the minimum distance to shift that search bound so that you won't fail high or low a second time...
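
The one use conceded here, using the fail-soft score as a hint for how far to move a bound that just failed, is essentially what a root aspiration loop (or mtd(f)) does with it. The sketch below is generic: search_root is a hypothetical fail-soft full-width search and the margins are arbitrary, so read it as an illustration of the idea rather than anybody's actual re-search logic.

Code:

#include <stdio.h>

#define INF 1000000

/* Stand-in for a real fail-soft root search.  Returning the "true" score
   regardless of the window is a legal fail-soft result; a real search
   would return some bound at least as far out as the violated edge.       */
static int search_root(int alpha, int beta, int depth)
{
    (void)alpha; (void)beta; (void)depth;
    return 137;                              /* pretend the game is +1.37  */
}

int main(void)
{
    int depth  = 12;
    int margin = 25;
    int guess  = 0;                          /* last iteration's score     */
    int alpha  = guess - margin, beta = guess + margin;
    int score;

    for (;;) {
        score = search_root(alpha, beta, depth);
        if (score <= alpha) {                /* fail low                   */
            beta  = score + 1;   /* fail-soft score bounds the re-search   */
            alpha = score - 4 * margin;
            if (alpha < -INF) alpha = -INF;
        } else if (score >= beta) {          /* fail high                  */
            alpha = score - 1;
            beta  = score + 4 * margin;
            if (beta > INF) beta = INF;
        } else {
            break;                           /* exact score inside window  */
        }
    }
    printf("settled on score %d inside window (%d, %d)\n", score, alpha, beta);
    return 0;
}

With fail-hard the returned value is pinned to the edge that failed, so the only recovery is to widen by some fixed or growing margin; the fail-soft value at least gives the minimum distance the bound has to move, which is the "hint" mentioned above.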
Henk

Re: Implications of Lazy eval on Don Beal effect in Fail Soft

Post by Henk »

bob wrote:I don't follow. First, there is no "update alpha" in a null-window search, which is what most of the tree looks like in PVS. If you increase alpha, you will instantly fail high, and vice-versa. When you start a search, how can you update anything when the only hash entries you have are from a search that is 1 ply shallower?
Yes, OK: updating alpha or beta in a null window would mean a fail high or a fail low, so an update of those variables will not happen within a null-window node search.

[By the way, with "search is started" in the previous post I meant the search is started for that (intermediate) node. But an update of alpha or beta can only happen in the non-null-window nodes.]
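
Spelled out in code, the point about where alpha can actually move looks something like the toy PVS routine below (again an abstract game tree with made-up helpers, not taken from any engine). Only nodes searched with a genuinely open window, beta > alpha + 1, can raise alpha and keep searching; in the null-window calls the first raise already satisfies alpha >= beta and the node simply fails high.

Code:

#include <stdio.h>

#define BRANCH 3
#define INF    1000000

static int leaf_value(int node)              /* toy evaluation             */
{
    return (int)((node * 2654435761u >> 24) % 201) - 100;
}

static int child_of(int node, int move)
{
    return node * BRANCH + move + 1;
}

/* Principal variation search on the toy tree: the first move gets the
   full (alpha, beta) window, later moves get a null window and are
   re-searched with the full window only if they fail high against it.     */
static int pvs(int node, int depth, int alpha, int beta)
{
    if (depth == 0)
        return leaf_value(node);

    int best = -INF;
    for (int move = 0; move < BRANCH; move++) {
        int child = child_of(node, move);
        int score;

        if (move == 0) {
            score = -pvs(child, depth - 1, -beta, -alpha);
        } else {
            score = -pvs(child, depth - 1, -(alpha + 1), -alpha);
            if (score > alpha && score < beta)      /* beat the null window */
                score = -pvs(child, depth - 1, -beta, -alpha); /* re-search */
        }
        if (score > best)
            best = score;
        if (best > alpha)
            alpha = best;      /* only matters while beta > alpha + 1       */
        if (alpha >= beta)
            break;             /* in a null-window node this happens at once */
    }
    return best;               /* fail-soft: may lie outside the original window */
}

int main(void)
{
    printf("toy root score: %d\n", pvs(0, 6, -INF, INF));
    return 0;
}

The null-window probe can only tell its parent "better than alpha" or "not better"; the exact amount, which is what fail-soft preserves, never feeds back into alpha or beta on the way.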
Henk

Re: Implications of Lazy eval on Don Beal effect in Fail Soft

Post by Henk »

Correction: update of beta will never happen in any node.
bob

Re: Implications of Lazy eval on Don Beal effect in Fail Soft

Post by bob »

Henk wrote:Correction: update of beta will never happen in any node.
Note that if you update alpha anywhere, you update beta at successive plies.
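
In a negamax formulation that is visible directly in the recursive call: the window the children see is the parent's window negated and swapped, so the alpha a node has just raised becomes the beta its remaining children are searched under. A trivial illustration with placeholder numbers:

Code:

#include <stdio.h>

int main(void)
{
    int alpha = -50, beta = 50;

    /* a negamax node calls its children with the window (-beta, -alpha)   */
    printf("this ply searches with   (%4d, %4d)\n", alpha, beta);
    printf("its children search with (%4d, %4d)\n", -beta, -alpha);

    alpha = 10;              /* suppose a move here raised alpha to +10    */
    printf("after raising alpha:     (%4d, %4d)\n", alpha, beta);
    printf("remaining children see   (%4d, %4d)\n", -beta, -alpha);
    return 0;
}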