lkaufman wrote:
Eelco de Groot wrote:
rvida wrote:
lkaufman wrote:
One of the ideas in the Ippolit-based programs (and in Critter) is that when doing the test for singularity (for a singular extension), the margin is proportional to depth. Aside from whether this tests well or not, can anyone explain the rationale for this idea? It is not at all obvious to us why a deeper search should require a larger margin. One might even argue that move ordering gets better with depth, so a smaller margin might be needed at higher depths.
The original purpose was to limit the overhead of the exclusion search. A larger margin causes it to fail high faster on non-singular moves. At lower depths the overhead is smaller, so a smaller margin is affordable.
But with other improvements in Critter's search this became (almost) a non-issue, and now I use a fixed margin.
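To make Richard's mechanism concrete, here is a minimal sketch of such an exclusion test. The search callback, the depth factor, and the fixed-margin alternative are all invented for illustration; this is not Critter's or the Ippolit programs' actual code.

[code]
#include <functional>

using Value = int;
using Depth = int;
using Move  = int;

// Stand-in for a null-window search at 'depth' with 'excluded'
// removed from the move list; not any real engine's API.
using ExclusionSearch =
    std::function<Value(Value alpha, Value beta, Depth depth, Move excluded)>;

// Returns true if ttMove looks singular: no alternative move reaches rBeta.
bool is_singular(const ExclusionSearch& search,
                 Move ttMove, Value ttValue, Depth depth)
{
    // Depth-proportional margin (the Ippolit-style rule). A wider margin
    // makes the exclusion search fail high sooner on non-singular moves,
    // capping its overhead at high depths. Critter's later fixed-margin
    // choice would be e.g. 'Value margin = 50;'.
    Value margin = 2 * depth;
    Value rBeta  = ttValue - margin;

    // Reduced-depth exclusion search with the hash move left out.
    Value v = search(rBeta - 1, rBeta, depth / 2, ttMove);

    return v < rBeta;   // every alternative fell short of rBeta
}
[/code]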
I vote for Richard's answer in this thread
My own rationale has changed a bit as well. The main purpose I see for the singular extension is the case where the singular move fails low, i.e. after a deeper search the move no longer produces a beta cutoff and so, of course, is no longer singular either. Only then has it been worth searching the hash move deeper. But if, in the position where you test for singularity, there are at least two moves that give a beta cutoff, then even if the first one fails the second may still take over, so in that case there is little point in a singular extension. Basically this means you do not need a margin at all: theoretically it makes more sense just to make sure there are two moves that fail high against beta.
The next factor is that the null-window results of the singularity test degrade very quickly, like all null-window searches. That the null-window result for the best alternative clears some margin is not worth much in precision, because you use very shallow searches (typically half depth); and if these searches go deeper the lines become less accurate (the search 'degrades', so to speak, and should drop further below beta if it cannot reach beta in the first place because of imprecise moves). Also, the hash result itself is not fixed, so you are testing against a moving target.
The larger margin, then, is there because you would still like to use this very imprecise null-window result even after it has degraded. But other than letting you use degraded null-window searches, I don't think the margin has a real meaning; its tuning depends more on how fast your null-window searches degrade in accuracy than on some actual distance to beta, or to the even more imprecise hash result of the stored beta cutoff.
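A rough sketch of that margin-free variant, assuming a hypothetical per-move null-window probe: singularity then simply means that no second move fails high against beta at reduced depth.

[code]
#include <functional>
#include <vector>

using Value = int;
using Depth = int;
using Move  = int;

// Stand-in for a null-window probe of a single move; not a real engine API.
using NullWindowProbe =
    std::function<Value(Move m, Value alpha, Value beta, Depth depth)>;

// ttMove counts as singular only if no *second* move also fails high
// against beta in a reduced-depth probe; no margin is involved at all.
bool is_singular_no_margin(const NullWindowProbe& probe,
                           const std::vector<Move>& moves,
                           Move ttMove, Value beta, Depth depth)
{
    for (Move m : moves)
    {
        if (m == ttMove)
            continue;
        if (probe(m, beta - 1, beta, depth / 2) >= beta)
            return false;   // a second cutoff move exists: not singular
    }
    return true;
}
[/code]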
Regards, Eelco
I'm not completely following your explanation about degraded null-window searches, but my own interpretation of it would be that if you reduce the search depth (for the singular test) by a percentage of the depth remaining rather than by a constant number of plies, you need a larger margin due to the increased gap between the singular-test depth and the actual depth. Is that what you are saying? If so, it sounds right.
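In code form, that reading might be a margin rule scaled by the gap between the parent depth and the reduced exclusion depth; the constants below are invented purely for illustration.

[code]
using Value = int;
using Depth = int;

// Hypothetical margin rule. With a percentage-style reduction the gap
// between parent and exclusion depth grows linearly with depth, so the
// margin grows too; with a constant ply reduction it would stay flat.
Value singular_margin(Depth depth)
{
    Depth exclusionDepth = depth / 2;        // percentage-style reduction
    Depth gap            = depth - exclusionDepth;
    return 4 * gap;                          // 4 is an invented constant
}
[/code]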
Yes, I think that is another argument for an increased margin. But I am just not sure the margin actually works this way, at least for Stockfish. I am pretty sure this has been tested, although I have no idea what other variations it was tested against; I am just not so sure that the reason we think we are doing this is actually the reason it works. I have posted about this some time before; maybe I am not expressing myself very clearly now, but I am trying to say it in different words. In my opinion this is not just about extending the singular move but about improving the quality of the tree, in case you have to fall back on a move other than the singular move. So for instance if you use a half-depth search, it is surely very imprecise, as Bob is saying, but it is in line with what you do in Internal Iterative Deepening: it improves the quality of the search tree behind the moves that are not the singular move, and that pays off if the singular move fails low against beta.
Also, if you test not against beta but against the value of the hash move, some reasoning for that is possible too. At very small depths the result returned by a null-window search against beta is just a lower bound, assuming it is a cut node and the hash move fails high (>= beta), but because the depth is so small it is also reasonably correlated with the static evaluation. So even if the static evaluation has some systematic error, the same systematic error is probably also present after a small search of the other moves in that node. Therefore, to find a move that is positionally close to the best move found so far in this CUT node, it is probably better to test not against beta but against something closer to a real positional evaluation (in this case the hash result), even though, alpha-beta-technically, it is just a lower bound above beta. After eight plies or so, I fear this no longer works; I can't see why it should, since the null-window test against beta should have lost its correlation with the static eval of the position. Does that make any sense?
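Side by side, the two candidate anchors for the exclusion window would look something like this; the margin rule is again invented for illustration.

[code]
using Value = int;
using Depth = int;

Value margin(Depth depth) { return 2 * depth; }   // invented rule

// Window lower bound when the margin is taken from beta.
Value rbeta_from_beta(Value beta, Depth depth)    { return beta    - margin(depth); }

// Window lower bound when the margin is taken from the stored hash score,
// i.e. from something closer to a positional evaluation of the node.
Value rbeta_from_hash(Value ttValue, Depth depth) { return ttValue - margin(depth); }
[/code]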
There seem to be several things in play here, which is why I am not so sure this is just about extending the singular move. Also, it has to be tested, but I don't believe the way the singular extension is done now, at least for Stockfish, is optimal. You just have to test it, and if what I am saying does not work, it is simply yet another theory fit for the dustbin.
Eelco
What do you (and others reading this) think about the question of whether the margin should be from beta, from the hash score, or from some function of both?
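For what it's worth, a "function of both" could be as simple as interpolating between the two anchors before subtracting the margin; the 50/50 weighting below is pure illustration, not something that has been tested.

[code]
using Value = int;
using Depth = int;

// Hypothetical blended anchor: average beta and the hash score, then
// subtract a depth-proportional margin. The weighting is invented.
Value rbeta_from_both(Value beta, Value ttValue, Depth depth)
{
    Value anchor = (beta + ttValue) / 2;
    return anchor - 2 * depth;
}
[/code]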