Singular extension

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

User avatar
hgm
Posts: 22338
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Singular extension

Post by hgm » Fri Jul 17, 2015 9:15 pm

When I was making my mini-Shogi opening book, analyzing all positions along the opening lines to high depth, I noticed there is a staggering number of positions where there is only a single playable move, and every deviation causes a score drop > 1.00. Having 6 or 8 such half-moves in a row is not uncommon, and usually the score from analyzing the start position of such a sequence is substantially different from a similar-depth analysis at the end.

So I realized my search must be all wrong, or at least sub-optimal. In a PV node at ply level n it spends a lot of time proving that all moves but the best are worse than a certain score, while this perceived best move might be doomed to lead to a very poor position, actually making it fail low compared to some of the moves that are now rejected. In any case it would require those other moves to be proven bad against a much tighter bound, making the refutation trees used in earlier iterations often inadequate.

When conducting the search of good opening lines by hand, I quickly learned to first follow moves with a promising score for as long as they were forced (as seen in multi-PV mode). Then at least I got a more or less reliable score I could use for testing the side branches against.

In an automated search, a singular extension should do this. I have never used one in any of my engines, though. After some thinking, the following algorithm seems reasonable to me:

At some remaining depth, say 7, rather than searching the non-PV moves in PV nodes with the null window at the PV-move score (i.e. regular PVS), I search them with an offset null window, at bestScore - 90. If any of them fails high, it gets re-searched with the normal PVS null window, and the remaining moves will also be searched as regular PVS. (I.e., the offset is reset from the initial 90 to 0.) If after searching all moves the offset is still 90, I know the PV move was singular.

In that case, I want to re-search the PV move with an extension (i.e. at 8 ply). For the search window there are two cases:
(1) The original alpha for the node was less than 90 below the (non-extended) PV-move score. In that case I keep using the normal search window. If the extended search does not fail low, the score is still above that of all other moves, and I am done. If it fails low, the entire node fails low (as the other moves already failed lower), and I am done as well.
(2) The original alpha was more than 90 below the PV-move score. Now I raise alpha to bestScore-90 for the extended PV-move search. If the search does not fail low, the PV move remains best, and I am done. However, if it fails low, one of the other moves might now be best, and I have no idea of the true score of any of the moves. (But I don't want to continue piling extension on extension in the designated PV branch to push the score below the original alpha, which might actually be -INFINITY, when other moves might already be better.)

So basically the whole search has to be attempted again, first searching the PV move non-extended with the original alpha as lower window bound. This will profit from hash hits of the extended search as far as pushing the score below the old bestScore-90 is concerned, but will have to work harder for the opponent to see if he can push the score further down towards the original alpha, or even beyond it. (Of course if the returned upper bound from the previous extended fail low of the PV move was already below the original alpha, we can discard the PV move right away, and can continue like normal PVS with a null window at the original alpha for the other moves.) If this again produces a score above alpha, it becomes the new bestScore for the re-search, and we repeat the whole circus. This might again prove that the lowered score of the extended PV-move search is singular, after which the extended re-search of it will again follow, and, by the time it reaches the tips of that branch, it might have to extend even deeper if more singular moves follow. Of course the PV move might now not be singular, because of the score drop it suffered from extension, or might not even be best. In that case another move becomes the PV move, and will then undergo the same procedure of singularity testing and extension.
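In rough C++, on a toy game tree, the detection-plus-extension part would look something like this (untested sketch; the Node type, the toy tree, and the handling of fail highs are simplifications of my own, and the full-node re-search after an extended fail low from case (2) is omitted):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Toy game tree just for illustration: a node is a leaf with a static score
// (from the side to move's point of view) or an inner node with one child
// per legal move, PV move sorted first.
struct Node {
    int staticScore;
    std::vector<Node> children;          // empty => leaf
};

constexpr int MARGIN = 90;               // singularity margin

// Negamax PVS with the proposed singular test at PV nodes.
int search(const Node& n, int alpha, int beta, int depth, bool pvNode) {
    if (depth <= 0 || n.children.empty())
        return n.staticScore;

    // PV move gets the full window.
    int best = -search(n.children[0], -beta, -alpha, depth - 1, pvNode);
    if (best >= beta)
        return best;

    // Scout the alternatives with the null window offset MARGIN below
    // bestScore; once one of them gets within the margin, singularity is
    // refuted and we fall back to the regular PVS null window.
    bool singular = pvNode;
    for (std::size_t i = 1; i < n.children.size(); i++) {
        int bound = singular ? best - MARGIN : best;
        int v = -search(n.children[i], -(bound + 1), -bound, depth - 1, false);
        if (v > bound) {
            singular = false;            // offset reset from 90 to 0
            // Fail high: re-search wider to get a usable score.
            v = -search(n.children[i], -beta, -std::max(alpha, best),
                        depth - 1, pvNode);
            if (v > best) best = v;
            if (best >= beta) return best;
        }
    }

    // All alternatives fell short by more than MARGIN: the PV move is
    // singular, so re-search it one ply deeper ("depth" instead of
    // "depth - 1"), with alpha raised to best - MARGIN as in case (2).
    if (singular) {
        int a = std::max(alpha, best - MARGIN);
        best = -search(n.children[0], -beta, -a, depth, pvNode);
    }
    return best;
}
```

On a two-move toy tree this detects the singular move and confirms its score with the one-ply-deeper re-search; a real engine would of course also need the re-search of the whole node for the fail-low case of (2).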

Does that sound about like how it should be done?

mvk
Posts: 589
Joined: Tue Jun 04, 2013 8:15 pm

Re: Singular extension

Post by mvk » Fri Jul 17, 2015 9:50 pm

hgm wrote:Does that sound about like how it should be done?
Many options. Instead of going back to the PV move you can also modify the reduction of the further moves. Or just mark the node and extend in the next iteration. Another thing to consider is the effect on the hash table, because you are now writing several types of bounds in it. Considering the granularity of your window and/or the evaluation can mitigate the pain. And you have another overhead because searching with a more difficult bound takes more nodes (which can be countered by reducing depth more btw).
Last edited by mvk on Fri Jul 17, 2015 9:56 pm, edited 4 times in total.
[Account deleted]

bob
Posts: 20348
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Singular extension

Post by bob » Fri Jul 17, 2015 9:52 pm

hgm wrote:When I was making my mini-Shogi opening book, analyzing all positions along the opening lines to high depth, I noticed there is a staggering number of positions where there is only a single playable move, and every deviation causes a score drop > 1.00. Having 6 or 8 such half-moves in a row is not uncommon, and usually the score from analyzing the start position of such a sequence is substantially different from a similar-depth analysis at the end.

So I realized my search must be all wrong, or at least sub-optimal. In a PV node at ply level n it spends a lot of time proving that all moves but the best are worse than a certain score, while this perceived best move might be doomed to lead to a very poor position, actually making it fail low compared to some of the moves that are now rejected. In any case it would require those other moves to be proven bad against a much tighter bound, making the refutation trees used in earlier iterations often inadequate.

When conducting the search of good opening lines by hand, I quickly learned to first follow moves with a promising score for as long as they were forced (as seen in multi-PV mode). Then at least I got a more or less reliable score I could use for testing the side branches against.

In an automated search, a singular extension should do this. I have never used one in any of my engines, though. After some thinking, the following algorithm seems reasonable to me:

At some remaining depth, say 7, rather than searching the non-PV moves in PV nodes with the null window at the PV-move score (i.e. regular PVS), I search them with an offset null window, at bestScore - 90. If any of them fails high, it gets re-searched with the normal PVS null window, and the remaining moves will also be searched as regular PVS. (I.e., the offset is reset from the initial 90 to 0.) If after searching all moves the offset is still 90, I know the PV move was singular.

In that case, I want to re-search the PV move with an extension (i.e. at 8 ply). For the search window there are two cases:
(1) The original alpha for the node was less than 90 below the (non-extended) PV-move score. In that case I keep using the normal search window. If the extended search does not fail low, the score is still above that of all other moves, and I am done. If it fails low, the entire node fails low (as the other moves already failed lower), and I am done as well.
(2) The original alpha was more than 90 below the PV-move score. Now I raise alpha to bestScore-90 for the extended PV-move search. If the search does not fail low, the PV move remains best, and I am done. However, if it fails low, one of the other moves might now be best, and I have no idea of the true score of any of the moves. (But I don't want to continue piling extension on extension in the designated PV branch to push the score below the original alpha, which might actually be -INFINITY, when other moves might already be better.)

So basically the whole search has to be attempted again, first searching the PV move non-extended with the original alpha as lower window bound. This will profit from hash hits of the extended search as far as pushing the score below the old bestScore-90 is concerned, but will have to work harder for the opponent to see if he can push the score further down towards the original alpha, or even beyond it. (Of course if the returned upper bound from the previous extended fail low of the PV move was already below the original alpha, we can discard the PV move right away, and can continue like normal PVS with a null window at the original alpha for the other moves.) If this again produces a score above alpha, it becomes the new bestScore for the re-search, and we repeat the whole circus. This might again prove that the lowered score of the extended PV-move search is singular, after which the extended re-search of it will again follow, and, by the time it reaches the tips of that branch, it might have to extend even deeper if more singular moves follow. Of course the PV move might now not be singular, because of the score drop it suffered from extension, or might not even be best. In that case another move becomes the PV move, and will then undergo the same procedure of singularity testing and extension.

Does that sound about like how it should be done?
It is not easy to explain SE, so I can't be 100% certain you do what I did when I tested this stuff last year/early this year. But the idea sounds right. PV singular is a bit easier to deal with, and when you read TA's article about DB SE as defined by Hsu, that's pretty straightforward and not so terribly costly. The worst part is fail-high singular moves, because when you fail high, normally you are done. So how do you flag it as singular? Hsu proposed doing an offset-window search to a reduced depth, as otherwise you are now doing a pure minimax tree if you don't cut off here.

I implemented this and it is messy as all hell if you decide to really do it right. Search all the other moves with offset and reduced depth. But what if one of those fails high? Now you have the original and this move. The new one could actually be significantly better than the original so you have to somehow resolve that. Or just give up and say "not singular". Same problem at PV nodes. Anything that fails high could be singular and much better than the best found so far. And then there is a trap... an infinite singular search, because you have to make sure that once something is searched as singular, if you have to choose between it and another move, both have to be searched with the singular depth. And another one may pop up. Now all 3 might have to be separated, etc... I even added Hsu's sticky transposition table to remember moves that were flagged as singular so they will be searched as singular for at least the next iteration to avoid massive search inconsistencies (as if we don't have enough of those with LMR).
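A minimal sketch of such a sticky flag might look like the following (the entry layout, field sizes, and names here are hypothetical illustrations, not Hsu's or Crafty's actual format):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical hash-entry layout with one extra "singular" bit, in the
// spirit of Hsu's sticky transposition table: once a move is flagged as
// singular the flag stays in the table, so the next iteration searches it
// as singular too, instead of flip-flopping between extended and
// non-extended scores from one iteration to the next.
struct HashEntry {
    static constexpr std::uint8_t SINGULAR = 0x80;  // high bit of flags

    std::uint64_t key;      // Zobrist key for verification
    std::int16_t  score;
    std::int8_t   depth;
    std::uint8_t  flags;    // bound type in the low bits, sticky bit on top

    bool isSingular() const { return (flags & SINGULAR) != 0; }
    void markSingular()     { flags |= SINGULAR; }
};
```

The point of the stickiness is purely to reduce search inconsistencies between iterations, not to change which moves get classified as singular.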

The idea always sounds good, but I never did better than "break even", and I wasn't going to keep a "break even" change that was that messy.

Henk
Posts: 5107
Joined: Mon May 27, 2013 8:31 am

Re: Singular extension

Post by Henk » Fri Jul 17, 2015 10:02 pm

Is it because El Chapo has escaped?

User avatar
Eelco de Groot
Posts: 3860
Joined: Sun Mar 12, 2006 1:40 am
Location: Groningen

Re: Singular extension

Post by Eelco de Groot » Fri Jul 17, 2015 10:34 pm

hgm wrote:When I was making my mini-Shogi opening book, analyzing all positions along the opening lines to high depth, I noticed there is a staggering number of positions where there is only a single playable move, and every deviation causes a score drop > 1.00. Having 6 or 8 such half-moves in a row is not uncommon, and usually the score from analyzing the start position of such a sequence is substantially different from a similar-depth analysis at the end.

So I realized my search must be all wrong, or at least sub-optimal. In a PV node at ply level n it spends a lot of time proving that all moves but the best are worse than a certain score, while this perceived best move might be doomed to lead to a very poor position, actually making it fail low compared to some of the moves that are now rejected. In any case it would require those other moves to be proven bad against a much tighter bound, making the refutation trees used in earlier iterations often inadequate.

When conducting the search of good opening lines by hand, I quickly learned to first follow moves with a promising score for as long as they were forced (as seen in multi-PV mode). Then at least I got a more or less reliable score I could use for testing the side branches against.

In an automated search, a singular extension should do this. I have never used one in any of my engines, though. After some thinking, the following algorithm seems reasonable to me:

At some remaining depth, say 7, rather than searching the non-PV moves in PV nodes with the null window at the PV-move score (i.e. regular PVS), I search them with an offset null window, at bestScore - 90. If any of them fails high, it gets re-searched with the normal PVS null window, and the remaining moves will also be searched as regular PVS. (I.e., the offset is reset from the initial 90 to 0.) If after searching all moves the offset is still 90, I know the PV move was singular.

In that case, I want to re-search the PV move with an extension (i.e. at 8 ply). For the search window there are two cases:
(1) The original alpha for the node was less than 90 below the (non-extended) PV-move score. In that case I keep using the normal search window. If the extended search does not fail low, the score is still above that of all other moves, and I am done. If it fails low, the entire node fails low (as the other moves already failed lower), and I am done as well.
(2) The original alpha was more than 90 below the PV-move score. Now I raise alpha to bestScore-90 for the extended PV-move search. If the search does not fail low, the PV move remains best, and I am done. However, if it fails low, one of the other moves might now be best, and I have no idea of the true score of any of the moves. (But I don't want to continue piling extension on extension in the designated PV branch to push the score below the original alpha, which might actually be -INFINITY, when other moves might already be better.)

So basically the whole search has to be attempted again, first searching the PV move non-extended with the original alpha as lower window bound. This will profit from hash hits of the extended search as far as pushing the score below the old bestScore-90 is concerned, but will have to work harder for the opponent to see if he can push the score further down towards the original alpha, or even beyond it. (Of course if the returned upper bound from the previous extended fail low of the PV move was already below the original alpha, we can discard the PV move right away, and can continue like normal PVS with a null window at the original alpha for the other moves.) If this again produces a score above alpha, it becomes the new bestScore for the re-search, and we repeat the whole circus. This might again prove that the lowered score of the extended PV-move search is singular, after which the extended re-search of it will again follow, and, by the time it reaches the tips of that branch, it might have to extend even deeper if more singular moves follow. Of course the PV move might now not be singular, because of the score drop it suffered from extension, or might not even be best. In that case another move becomes the PV move, and will then undergo the same procedure of singularity testing and extension.

Does that sound about like how it should be done?
I had some minor comments.

1) Shogi, from the little I have read about it: I think it was Tord who once said it is all much more tactical than chess, especially because you can drop pieces. So there are likely more singular positions, although I don't know if that also applies to the opening in Shogi, because there you don't yet have pieces in hand to drop. But the rules are just different from chess, so I don't know.

2) Just look at how much text you had to write for case (2) above :) Would it not be much more natural if, instead of a fixed offset (Stockfish does this too, though not with a totally fixed offset like your 90 but with one that depends on remaining depth), you used the alpha of your aspiration search window (with the aspiration window width in theory newly adapted to your singular extension, of course)? Then everything from (2) reverts to (1). (Intuitively; that is what I suspect might be the case.) Your 10-line pseudo code for (2) shrinks to the 2-line pseudo code of (1) :D Just kidding, but that seems somehow familiar to me, at least as far as the extension stuff is concerned, as one of the details that Rainbow Serpent does just slightly differently from Stockfish, at least in PV nodes (you are only describing singular extension for PV nodes here), what Bob calls PV singular.

(Rainbow Serpent is not so clever that it avoids searching the other moves against a new alpha when the new alpha is higher than the original alpha of the aspiration window, like you propose, Harm, but that is probably not a significant optimization, because the stored hash entries will make the new search very quick (and also, there are not that many PV nodes); it finds that the other moves in the PV node all failed low before against a lower alpha, and returns a stored upper bound that is sufficient for the new alpha.

It also has no tuned aspiration window for this, it just uses the one it 'inherited' from Stockfish. As long as the aspiration window is small, I don't think you can get much from retuning it for PV Singular Extensions, but I do not actually know for a fact.)
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan

User avatar
hgm
Posts: 22338
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Singular extension

Post by hgm » Sat Jul 18, 2015 9:56 am

Thanks, everyone, for the feedback!

@Marcel:
Indeed, the hash table would require smarter handling, as currently I am using a single-bound table. But while evaluating the idea I plan to simply alter the hash key for the offset searches.

I did think of implementing it entirely as reductions, but that doesn't seem the correct solution for the situation I am facing, where singular moves typically come in stretches of 6 to 10. I really want the PV to surge ahead in such cases. It is basically a higher-order quiescence search, needed because a (say) 7-ply search does not give a reliable score in singular positions, so that you have to extend all the way to the closest non-singular position to do a 7-ply search there.

@Bob:
Good point about the fail high. But I believe that in my case, where I plan to do the extension only in PV nodes, it would be consistent to take the cutoff without further investigation, as the fail high makes it no longer a PV node.

@Eelco:
Indeed, Shogi seems in general to be much more forcing than Chess. Note that I was building the opening book for mini-Shogi, which is tactical from the beginning, because there is no closed rank of Pawns separating the pieces (each side has only a single Pawn), so that captures start already after just a few moves. I suspect in regular Shogi there also is a very high fraction of singular positions, as I get an amazingly high ponder-hit percentage when playing my engine against SPEAR, while their evaluations are completely different.

You are right that I use the margin of 90 basically as an aspiration window in the singular PV node, in case the real window was much wider. I never used aspiration in any of my engines, but I guess this is a case where you really could benefit from it. I agree that it would simplify things a lot when aspiration in the root guaranteed that case (2) basically could not occur.

Cardoso
Posts: 275
Joined: Thu Mar 16, 2006 6:39 pm

Re: Singular extension

Post by Cardoso » Mon Jul 27, 2015 7:03 pm

Hi everybody, :)
I also noticed SF doesn't use the current search window for the exclusion search; instead it uses the TT entry value (as well as a depth-dependent margin). Intuitively it seems more logical to use the current search window minus 'singular_margin'. Which do you guys think is better?

Code:

Value rBeta = ttValue - 2 * depth / ONE_PLY;
value = search<NonPV, false>(pos, ss, rBeta - 1, rBeta, depth / 2, cutNode);
or

Code:

Value margin = 2 * depth / ONE_PLY;
value = search<NonPV, false>(pos, ss, alpha - margin, beta - margin, depth / 2, cutNode);
Alvaro

User avatar
Eelco de Groot
Posts: 3860
Joined: Sun Mar 12, 2006 1:40 am
Location: Groningen

Re: Singular extension

Post by Eelco de Groot » Tue Jul 28, 2015 3:33 am

Cardoso wrote:Hi everybody, :)
I also noticed SF doesn't use the current search window for the exclusion search; instead it uses the TT entry value (as well as a depth-dependent margin). Intuitively it seems more logical to use the current search window minus 'singular_margin'. Which do you guys think is better?

Code:

Value rBeta = ttValue - 2 * depth / ONE_PLY;
value = search<NonPV, false>(pos, ss, rBeta - 1, rBeta, depth / 2, cutNode);
or

Code:

Value margin = 2 * depth / ONE_PLY;
value = search<NonPV, false>(pos, ss, alpha - margin, beta - margin, depth / 2, cutNode);
Alvaro
Hello Alvaro, let me try to describe how this is done a bit differently in the RS Singular Extension than in Stockfish. It is all on GitHub; it is probably easier to see it there. Apart from this, most of the code is exactly like Stockfish.
  • Rainbow Serpent just uses alpha, both in NonPV and in PV nodes. In NonPV nodes alpha is usually just a little below ttValue, so the difference with Stockfish is not so great there. But the idea then in RS is to use a Fail High above alpha for MultiCut pruning.
  • In your example, beta - margin is alpha - margin + 1 in NonPV nodes. But in PV nodes, using a non-zero window with a NonPV search is a bit tricky, ill-defined. I assume you would want it to be alpha - margin + 1 there too.

    Testing below alpha, I think, introduces the problems described by Harm though. Also, in Stockfish the exclusion search depth is only half of the regular depth, so it is probably not precise enough for MultiCut pruning. In PV nodes you could use a higher depth, because all moves will be tested anyway. But if you use a margin below alpha, it seems not so useful. And Stockfish just uses half depth in PV nodes too, unlike what Harm was proposing. Rainbow Serpent uses the full depth. (But no MultiCut in PV nodes; I have not tried that. It is not impossible I think, and Bob always wants to treat PV nodes just like any other node :) )
  • The downside of using alpha in PV nodes is what to do after you had a Fail High in the PV node: a new move now gets tested, but especially in the grandchild node below, with the same player to move, the alpha-beta window is now usually much smaller, with alpha very close to the value you got for the Fail High move two nodes above. (At least as far as I can see; it would probably help to draw an actual example of this.) So if you test the other moves against alpha (instead of against the fixed margin below ttValue) it is more difficult to Fail High, and the first move tried now has a big chance of being classified as singular. Not a very big problem, but it makes for deep searches with lots of Singular Extensions in the PV. It's a bit less controlled than the way Stockfish searches the PV, and for games, I think, this is harder to tune right. But still, the alternative of testing against a margin below alpha does not feel right to me. It works well enough in practice, of course.
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan

D Sceviour
Posts: 330
Joined: Mon Jul 20, 2015 3:06 pm
Contact:

Re: Singular extension

Post by D Sceviour » Mon Oct 26, 2015 3:19 pm

I tried Singular Extension for the first time. Surprisingly it is not that difficult to code, and there are several modern examples available. What is astounding is the less-than-2% hit rate for singular extensions. The extra exclusion search should by itself produce a decrease in overall strength. Either the singular hit is very successful (as it sometimes indicates in tactical test positions), or the exclusion search is so shallow and fast that the delay is unnoticeable. The exclusion search would return quickly on hash hits with the reduced depth. So far, the results seem to show no overall gain (or loss) in tests. Better extension conditions seem necessary.

The unanswered question is - why extend at all on fail-high nodes?

User avatar
hgm
Posts: 22338
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Singular extension

Post by hgm » Mon Oct 26, 2015 3:46 pm

D Sceviour wrote:The exclusion search would return quickly on hash hits with the reduced depth.
This is not necessarily true: because of the shifted bounds, the exclusion search will tend to fail high for moves that would fail low in the normal search with the unshifted window. (If we don't pick very bad moves first. Once one of the alternative moves has failed high on the shifted window, we already know the PV move was non-singular, and there is no need to do further exclusion searches.)

So they have completely different trees, with very little overlap. The exclusion search will hit upon a node of the previous iteration of the normal search at its root, but because of the shifted window it will not have usable bounds. So it will have to search, and then overwrite the entry with its low-depth result, making it unusable for the normal search. This will typically happen in all nodes they have in common. This is why I thought it could be better to just 'shift' the hash key by a constant when working on exclusion searches.
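E.g. something like this (the constant value and the helper name are just for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Keep exclusion-search results out of the normal search's hash entries by
// XORing a fixed, arbitrary constant into the Zobrist key whenever we are
// inside an exclusion search. The shifted-window, low-depth results then
// live in separate entries and never overwrite the normal ones.
constexpr std::uint64_t EXCLUSION_SHIFT = 0x9E3779B97F4A7C15ULL;

std::uint64_t probeKey(std::uint64_t zobrist, bool exclusionSearch) {
    return exclusionSearch ? zobrist ^ EXCLUSION_SHIFT : zobrist;
}
```

Since XOR is its own inverse, the two key spaces stay disjoint but the mapping costs nothing and needs no extra storage in the entry itself.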
