Fail low after fail high

Discussion of chess software programming and technical issues.

mvk
Posts: 589
Joined: Tue Jun 04, 2013 8:15 pm

Re: Fail low after fail high

Post by mvk » Sun Apr 05, 2015 1:37 pm

hgm wrote: In this scenario, it would be more accurate to take the lower bound obtained by the initial fail high, and use it as if it were an exact score. (If you don't want to do any extended re-re-search, which I expect to be prohibitively expensive, and still no certain cure, as even that could fail low.) This means you might under-estimate the score, but believing the fail low would under-estimate it even more.
Sounds familiar: https://groups.google.com/d/msg/rec.gam ... kpT8qfrhQJ

I do it currently (again), but only for mates and endgame table wins. It helps solve KRK using only forward search, without evaluation, for example.
I don't think I will reintroduce it for non-mates. It seems safer to trust the re-search score over the scout score, due to the unequal effective search trees.
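A minimal sketch of where this sits in a PVS move loop, assuming fail-soft negamax; Search(), Position and MATE_BOUND are hypothetical placeholders, and limiting the override to mate/tablebase-win scores mirrors the restriction described above rather than any particular engine:

Code: Select all

/* Scout (null-window) search failed high; the full-window re-search
 * then fails low.  Instead of believing the fail low, fall back on
 * the scout's lower bound and treat it as the exact score.           */
int score = -Search(pos, -alpha - 1, -alpha, depth - 1);   /* scout search */
if (score > alpha && score < beta) {
    int full = -Search(pos, -beta, -alpha, depth - 1);     /* re-search    */
    if (full <= alpha && (score >= MATE_BOUND || score <= -MATE_BOUND))
        full = score;   /* fail low after fail high: keep the mate/TB-win
                           lower bound as if it were exact               */
    score = full;
}

For non-mate scores the sketch simply keeps the re-search result, which is the "trust the re-search over the scout" choice described above.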
[Account deleted]

User avatar
hgm
Posts: 23718
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Fail low after fail high

Post by hgm » Sun Apr 05, 2015 2:06 pm

But would it also be safer to trust the shallow search over the deep hash hit? If the deep hash hit had also been used in the re-search, it might not have failed low, but could have returned the same score as the scout search. And it would have had an even bigger effective tree, as the hash hit grafts a deeper sub-tree onto it than the re-search would have done.

mvk
Posts: 589
Joined: Tue Jun 04, 2013 8:15 pm

Re: Fail low after fail high

Post by mvk » Sun Apr 05, 2015 2:27 pm

Good point. I think a deep hash hit in the PV is more of an anomaly in the presence of reductions than the regular case. To be sure, it should be tested. I will see if I can try it later this month.
[Account deleted]

bob
Posts: 20555
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Fail low after fail high

Post by bob » Sun Apr 05, 2015 6:11 pm

hgm wrote: In this scenario, it would be more accurate to take the lower bound obtained by the initial fail high, and use it as if it were an exact score. (If you don't want to do any extended re-re-search, which I expect to be prohibitively expensive, and still no certain cure, as even that could fail low.) This means you might under-estimate the score, but believing the fail low would under-estimate it even more.

Of course in general it would be hard to know whether this scenario applies, rather than the null-window search being pruned more. For this reason it would be better to address the problem when it first occurs, and where we can be aware that it occurs:

If a search of depth d is requested, and the position gets a hash hit with draft N > d, and a lower bound L with alpha < L < beta, this would not be usable. So you would do a re-search at depth d. Now suppose that this search fails low, or, even more generally, returns a value < L. (Some engines would do the re-search with window {L, beta} after such a hash hit, in which case a score < L would always mean a fail low.)

You would now have to proceed based on contradictory information: one search says score < L, and a deeper search says score >= L. It makes sense not to allow the information obtained by the shallow search to overrule that of a deeper one, so in any case score >= L. The problem is that the deep search doesn't tell you how much larger, and the shallow search refuses to tell you either. The least wrong of all possible evils seems to be to assume that the score is exactly L: this does not violate the result of the deeper search, and it only minimally violates the result of the shallower re-search. A similar policy could be used in the opposite case, where an over-deep hash hit gives you an upper bound that is not usable for the current window, but the shallower search provides a lower bound that is larger.

Propagating these 'consistent' scores towards the root should ameliorate search instability in general.
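A rough sketch of that clamping policy, assuming a fail-soft search and a transposition-table probe that reports a draft, a bound type and a value; the names (Search, have_hash_hit, hash_draft, hash_score, BOUND_LOWER, BOUND_UPPER) are hypothetical, not taken from any specific engine:

Code: Select all

/* A hash hit from a deeper search (hash_draft > depth) carries a bound
 * that falls inside the window, so it gives no cutoff.  After the
 * depth-d search, do not let the shallower result contradict the
 * deeper bound: clamp the returned score to the bound value.          */
int score = Search(pos, alpha, beta, depth);        /* shallower re-search */

if (have_hash_hit && hash_draft > depth) {
    if (hash_type == BOUND_LOWER && score < hash_score)
        score = hash_score;   /* deeper search said score >= L; assume exactly L */
    if (hash_type == BOUND_UPPER && score > hash_score)
        score = hash_score;   /* deeper search said score <= U; assume exactly U */
}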
That is what I do. I get a score of S on the first move. Then, using PVS, I search the remainder of the moves with (S, S+1). If I get a fail high, I report it, record the current best score as S+1, and then re-search with (S, S+N) (N is the fail-high margin, which ramps up by something like 2x for each fail high). If that fails low, the fail-high move is still best, and I continue searching with (S+1, S+2) from this point forward. The only drawback is that another fail high leaves me unsure which move is really best, since the first fail high did not resolve to a score. I opt for simplicity and consider the new fail-high move as best, and now search with (S+2, S+3), with the current best score as S+2... I did a lot of testing on the cluster, and this was better than getting the null-window fail high, then a fail low, and ignoring the fail high.
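A condensed sketch of that root loop, assuming a hypothetical RootSearch() that searches one root move with the given window; the variables (moves, n_moves, INF, FAIL_HIGH_MARGIN) are placeholders, and the window arithmetic follows the description above, not Crafty's actual source:

Code: Select all

/* First move searched with a full window gives S; the rest are scouted
 * with (S, S+1).  On a fail high the move is reported as best with
 * score S+1, then re-searched with (S, S+N).  If that re-search fails
 * low, the fail-high move stays best and the remaining moves are
 * searched against the raised bound.                                   */
int S = RootSearch(moves[0], -INF, +INF, depth);
int best = 0;

for (int i = 1; i < n_moves; i++) {
    int score = RootSearch(moves[i], S, S + 1, depth);  /* scout (S, S+1)   */
    if (score > S) {                                    /* fail high        */
        best = i;                   /* report move i as best, score S+1     */
        int N = FAIL_HIGH_MARGIN;   /* would ramp ~2x on repeated fail highs */
        score = RootSearch(moves[i], S, S + N, depth);  /* widened re-search */
        if (score <= S)
            S = S + 1;  /* fail low after fail high: keep move i as best,
                           continue the rest with (S+1, S+2)              */
        else
            S = score;  /* resolved score becomes the new bound           */
    }
}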

Don't forget: in Crafty I do NOT treat PV and non-PV nodes differently, so the re-search after the fail high does not reduce less or extend more. This is the very reason I do not do that.
