If you do LMR on all non-captures, and exempt all captures from it, having an additional recapture extension seems like overdoing it.
The problem with recapture extensions is that you must use recaptures to refute silly losing captures by the opponent. So you tend to extend many totally idiotic lines. The better your capture is, the more stupid and likely irrelevant the sub-tree you are in will be.
It would make much more sense to reduce all other moves when you have a single capture that brings you very far above the window.
Singular Extensions
-
- Posts: 28350
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Singular Extensions
Daniel Shawul wrote: Overall I find (5) elegant, and that is what I tried, but it failed badly. I will eagerly await your test results on this.

I did not like the results when I tried this. But Bruce Moreland fell in love with it, it became a part of his search, and he certainly seemed to play well enough. There is a significant cost, since this reduced-depth SE test is done at the start of every last node. The key question is: is there a reduction factor you can use that mitigates the cost, without getting lots of false singular indications and without missing key ones?
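The reduced-depth singularity test described above can be sketched as follows. This is a toy illustration, not Crafty's actual code: `reduced_search` stands in for a real recursive search at reduced depth, and the function names and the 50-centipawn margin are assumptions made for the example.

```c
#define SE_MARGIN 50   /* offset below the hash score; a tuning assumption */

/* Stand-in for a reduced-depth, null-window search. In a real engine this
   would be a recursive call at depth - R with window (beta_s - 1, beta_s);
   here each "move" simply has a precomputed score. */
static int reduced_search(int move_score, int alpha, int beta) {
    if (move_score >= beta) return beta;    /* fail high */
    if (move_score <= alpha) return alpha;  /* fail low  */
    return move_score;
}

/* Returns 1 if the hash move is singular: every alternative move fails
   low against a bound offset below the hash move's score. If any other
   move reaches the offset bound, there is no singular move to extend. */
int is_singular(const int scores[], int n, int hash_move, int hash_score) {
    int beta_s = hash_score - SE_MARGIN;
    for (int i = 0; i < n; i++) {
        if (i == hash_move) continue;              /* exclude the candidate */
        if (reduced_search(scores[i], beta_s - 1, beta_s) >= beta_s)
            return 0;                              /* a second good move exists */
    }
    return 1;
}
```

The early return models what Bob describes doing in the Crafty version: any fail high on the offset window immediately aborts the test with "no singular move to extend".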
Daniel Shawul wrote: Variations I tried include avoiding singularity tests at ALL nodes. It seems also that originally Hsu tried PV/CUT singular only, and also avoided captures in the test.

Correct. But I do not believe he tried to categorize a node as ALL; he discovered it by not getting any improvement to alpha. I tried the categorization approach in Cray Blitz, since DTS used the node type to decide whether to split or not, assuming there were no nodes that satisfied the YBW condition. But I found that did not work very well, since it is impossible to identify ALL/CUT nodes with high accuracy before you search them.
Daniel Shawul wrote: You mentioned in the past that recaptures were intentionally avoided in the singularity test, but I do not see anything about that in your post here. What is the stance?

Didn't think to mention it, but none of my Crafty experiments extended recaptures, because I already had a recapture extension in the search. That was removed early in cluster testing when it was proved that it hurt Elo rather than helped (not a big loss, but any loss is a loss). So it might be that such a restriction is not correct today, and I plan on testing both ways. I also turned it off in pawn-only endings because it can run amok there. Hsu mentioned this, and I did the same in Cray Blitz. But again, now that I can test rather than guess, I'm going to look at this point carefully as well...
Daniel Shawul wrote: About PV-singular: when two moves become "interesting", isn't it better to leave the test?

Depends on the level of accuracy you want. It is possible that the second move is really the one that is singular. The first score is +1.0, the next few moves fail low on a +0.5 upper bound, then one fails high. It might well be +3.0, which would also cause the first move to fail low on the +2.5 beta bound. In CB I did this as Hsu did. In the Crafty version (done at the front of search), when any move failed high on the offset-window search, I threw up my hands and returned "0" (no singular move to extend).
Daniel Shawul wrote: Trying further search sounds like beating a dead horse. I know at a PV node a singular extension could help a lot, but if that is the case I would just extend the second move by UNITDEPTH / 2 and move on.. I do not store the result of the singularity test in the TT, because on the next ply a "forced" regular search is conducted with all of its results stored in the TT. That should make it cheap..

I am not sure what you mean by "result". If you are talking about Hsu's "sticky transposition table" idea, which remembered a position and a move that was singular there so that it would remain singular for consistency, I did not do this in the Crafty version. But I plan on testing the idea, because logically, if a position has a singular move somewhere in the tree, that move ought to be singular at other points in the tree where the same position occurs. Yes, the depth can vary, but extending the move uselessly might be less expensive than repeating the singular test, particularly if it fails.
Daniel Shawul wrote: Some store the result of a singular search in the regular TT (by modifying the hash key) or in a "sticky TT" as you mentioned. Is this really necessary?

The idea behind the sticky table was to provide a place to store this information where it would not easily be replaced. The TT size back then was much more limited than what we see today. But even with a RAM size of 16GB, that gives me 1B normal TT entries. The fastest hardware I have used for Crafty gave a search speed of 100M nodes per second. If I stored q-search positions, I'd fill that in 10 seconds; as it is, it might take 100 seconds to fill, and you start overwriting way before it is full. You'd like to both (a) avoid repeating the singularity test, since it is not free, and (b) extend a move found singular here on the next iteration as well, for consistency, even though the singular test might fail with an additional ply of depth to work with. All of this is subject to testing, since I have a way to do that quickly and accurately now.
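A minimal sketch of the "sticky" side table idea described above: a small table, separate from the main TT, that remembers which move was singular at a position so the extension stays consistent on the next iteration without repeating the test. The layout, size, and always-replace policy here are assumptions, not Hsu's or Crafty's actual scheme.

```c
#include <stdint.h>

#define STICKY_SIZE 1024          /* deliberately small and separate from the
                                     main TT, so entries are not flushed by
                                     ordinary search traffic */

typedef struct {
    uint64_t key;                 /* full position signature */
    int      move;                /* the move found singular here */
} StickyEntry;

static StickyEntry sticky[STICKY_SIZE];

/* Remember that `move` was singular at position `key` (always replace). */
void sticky_store(uint64_t key, int move) {
    StickyEntry *e = &sticky[key % STICKY_SIZE];
    e->key  = key;
    e->move = move;
}

/* Return the remembered singular move for `key`, or -1 on a miss. */
int sticky_probe(uint64_t key) {
    const StickyEntry *e = &sticky[key % STICKY_SIZE];
    return (e->key == key && e->key != 0) ? e->move : -1;
}
```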
Daniel Shawul wrote: Finally, I think that (5) is by far the most elegant of all, and cheaper IMHO. I already do IID at every node (sometimes I exclude ALL nodes), so that already gives me something to test. I am in fact a little bit surprised to learn that Hsu/Campbell's original scheme sounds a bit ad hoc. FH-singular, sticky TT??

Not sure how you are applying "ad hoc" here. You have three choices for a node. You search everything without improving alpha, which is an ALL node in Knuth/Moore terminology. You get a score > alpha but < beta, which is a PV node in Knuth/Moore. And finally you get a score >= beta, which is a CUT node. He simply used two different tests, because CUT and PV nodes are intrinsically different. At a CUT node, when you get a score >= beta, you are normally done. But Hsu did the singular test; if it passed, he did the re-search, and then used that score in place of the original. If it still failed high, he returned, but if not, he would continue to search for another branch that would cause the fail high to happen. PV-singular didn't need the reduced-depth approach, since you are searching each move fully anyway, so it doesn't have as much overhead and doesn't need the reduced-depth searches to control it.

Daniel Shawul wrote: Finally, you said you tried (6), was that any good?
Daniel

At the time, I found no benefit to _any_ of the ideas. Bruce stuck with (5) and swore by it. I tested it several times, but the only thing I saw was that (a) overall depth was reduced in order to (b) produce PVs that were longer than normal. I could not convince myself it played better. But I could not test with the accuracy I can reach today.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Singular Extensions
hgm wrote: If you do LMR on all non-captures, and exempt all captures from it, having an additional recapture extension seems like overdoing it. The problem with recapture extensions is that you must use recaptures to refute silly losing captures by the opponent. So you tend to extend many totally idiotic lines. The better your capture is, the more stupid and likely irrelevant the sub-tree you are in will be. It would make much more sense to reduce all other moves when you have a single capture that brings you very far above the window.

I am no longer doing that. I reduce captures just like every other move if they are not searched up front (which requires SEE(capture) >= 0). So losing captures can be reduced just like all other moves.
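The reduction policy described here (captures with SEE >= 0 searched up front and exempt from LMR; losing captures treated like quiet moves) might be sketched like this. The depth and move-count thresholds are illustrative assumptions, not Crafty's values.

```c
/* Decide whether LMR may reduce this move. Captures with SEE >= 0 are
   searched up front and never reduced; losing captures (SEE < 0) and quiet
   moves searched later in the move list are eligible. */
int lmr_reduction(int depth, int moves_searched, int is_capture, int see) {
    if (is_capture && see >= 0)
        return 0;                 /* winning/equal capture: no reduction */
    if (depth < 3 || moves_searched < 4)
        return 0;                 /* too shallow, or too early in the list */
    return 1;                     /* reduce by one ply (sketch) */
}
```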
As far as your last comment goes, it sounds reasonable. As far as recapture moves, in Cray Blitz we found (as Hsu did) that extending them as singular was a waste. A full-width search necessarily searches all the dumb captures. And extending the move that shows they are bad was not efficient.
I suspect that the basic idea of SE is going to be a good one. But the key is going to be limiting this so that the effect is to see more useful stuff, not blow the tree up so that we see less. I seem to recall another exception was to not do SE on a move that captures the piece moved in the previous ply, for the same reason. You hang a piece, I rip it, I don't need an extra ply to see that this is a good idea.

I'm going to start with idea (5) and the TT (ip*) approach since I have that code handy. I have started the modifications necessary to make both work in current Crafty. I would not be surprised to see this turn into a year-long project since there are so many possible tweaks to the idea, and I can accurately test and measure each and every one of them. Of the two, the TT approach is by far the simplest, and the old code for this is very short. Idea (5) is a bit more complex and since everything has changed inside Crafty, it is a bit more work to make it fly.
Results will be posted as tests are run, so that anyone can offer feedback.
Let me add, my explanations (initial post) are from memory, not from looking at my old code. So I have probably forgotten other details, exclusions, etc. As I test something, I'll try to precisely explain what was done so that it will be clear.
-
- Posts: 28350
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Singular Extensions
bob wrote: I suspect that the basic idea of SE is going to be a good one. But the key is going to be limiting this so that the effect is to see more useful stuff, not blow the tree up so that we see less. I seem to recall another exception was to not do SE on a move that captures the piece moved on the previous ply, for the same reason. You hang a piece, I rip it, I don't need an extra ply to see that this is a good idea.

I think the idea of SE is that when a refutation of a certain line is based on a single cut-move (or worse, a sequence of such moves), you take an unacceptable risk by relying on it: if on deeper search the position you end up in turns sour, no reasonable plan B exists, and you are in deep ****. So you want to guard against that by already doing the deeper search in advance, so that it is less likely you will get an unpleasant surprise.
Grabbing a hanging piece does not really fit that picture. It is likely a very profitable undertaking, of course, but even if deeper search reveals the piece was poisoned, no harm is done if you have alternative moves that also bring you above beta. Even if they are less profitable.
So it is not really the difference between the best move and the second best that begs for the extension, but the fact that you have only one move that brings you above (or very near) beta. If he hangs a Queen, and _not_ taking the Queen gets me checkmated, then taking it might still be worth the extension.
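This criterion can be restated as: extend only when exactly one move reaches (or comes near) beta, regardless of the gap to the second-best move. A sketch, where the near-beta margin is an assumption:

```c
/* Returns 1 if exactly one move's score reaches beta - margin, i.e. a
   single move stands between us and failing low; that is the move that
   deserves the singular extension under the criterion above. */
int sole_move_above(const int scores[], int n, int beta, int margin) {
    int count = 0;
    for (int i = 0; i < n; i++)
        if (scores[i] >= beta - margin)
            count++;
    return count == 1;
}
```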
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Singular Extensions
Agreed on most points.
bob wrote: I am not sure what you mean by "result". If you are talking about Hsu's "sticky transposition table" idea that remembered a position, and a move that was singular, so that it will remain singular for consistency, I did not do this in the Crafty version. But I plan on testing the idea, because logically if a position has a singular move somewhere in the tree, that move ought to be singular at other points in the tree where the same position occurs. Yes, the depth can vary, but extending the move uselessly might be less expensive than repeating the singular test, particularly if it fails.

I am using (5), so if a singular move is overwritten, we will do the reduced-depth search first. If it is there, we extend it; otherwise the search will be cheap due to other TT cutoffs down the tree. In my case consecutive singular tests are avoided (similar to null move), so if the singular move gets overwritten at this ply, the reduced-depth search will get a lot of cutoffs at ply + 1.
A hypothetical case: if we stored and probed TT entries only at odd or even plies, I do not think it would make us significantly weaker. Another example is storing the result of a null-move search in the TT; even though I do it now, I do not think it would matter much if I didn't. I could do the same for the "excluding-bestmove" singular search by xoring the best move's key, but I am doubtful of its benefit. Same goes for the "sticky TT".
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Singular Extensions
Daniel Shawul wrote: Agreed on most points. ... Another example is storing the result of a null-move search in the TT; even though I do it now, I do not think it would matter much if I didn't. I could do the same for the "excluding-bestmove" singular search by xoring the best move's key, but I am doubtful of its benefit. Same goes for the "sticky TT".

I think storing everything in the TT has a benefit. If you do a null search at position P and it fails high, then when you reach position P through a transposition of moves, why do the null search again (even though it is fairly inexpensive)?
However, Hsu was more concerned by scenarios where a fail high is followed by a fail low. A move can look singular at depth D, but not at D-1 or D+1. If you extend the move during this iteration, but not the next, you risk that kind of instability. Whether it is an issue or not is difficult to say. From a purely theoretical point of view, his idea makes sense. Whether it makes sense in the practical terms of Elo, only testing can (and will, by the way) say.
(5) is easier, because that is simply a separate procedure called from the top of search, and it's already written. But I am doing the TT approach first since apparently rybka, ip*, robo* all the way to StockFish are using it and believe that it works. Easy enough to test although it will need some tuning since when I tried this, depths were far less than today so things like remaining depth required to do a SE search will likely be different. And the SE margin also will likely not be what it was back then as my positional scores are quite a bit larger than when I wrote that code. I'm hoping to get this test going tonight. In the old code, part of this stuff was done in hash.c, the rest in search.c. I am merging that to put it all into search.c for the time being, to make the changes simpler (if slightly less efficient).
One change I had forgotten about was the requirement to fudge the hash signature when doing the SE search (offset window) because otherwise things won't work. I already know there was a hash hit, which is what gave me the hash move to test for singularity. If I just do a recursive search call, the hash entry which supplied the hash move, but which did not have enough draft (or else a useful bound) might well terminate the SE proof search instantly, and that would be useless. I removed that tiny bit of hashing code when I stopped testing the SE code. I have been looking at the code carefully to see what needs to be added, and where, to support what I was doing back in 1997. It is amazing how significantly things have changed since then, from search, to no separate code for black/white, bits renumbered, data structures changed, hashing changed significantly, etc...
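The signature "fudge" described here (and the xor-the-best-move trick Daniel mentions earlier) can be sketched as below. The move mixer is a hypothetical splitmix64-style hash, not Crafty's actual Zobrist scheme; the point is only that the exclusion search probes and stores under a different, reversible key, so it cannot instantly hit the normal entry for the same position.

```c
#include <stdint.h>

/* Hypothetical mixer turning a small move encoding into a 64-bit value.
   Any good move->hash mapping would do; this is not Crafty's scheme. */
static uint64_t move_zobrist(int move) {
    uint64_t h = (uint64_t)move * 0x9E3779B97F4A7C15ULL;
    h ^= h >> 29; h *= 0xBF58476D1CE4E5B9ULL; h ^= h >> 32;
    return h;
}

/* Key used for the singular (exclusion) search: xoring the excluded move
   into the position key separates its TT traffic from the normal search,
   while staying reversible (xor the same move back to recover the key). */
uint64_t exclusion_key(uint64_t position_key, int excluded_move) {
    return position_key ^ move_zobrist(excluded_move);
}
```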
Once I do this, (5) will be interesting because it can SE extend at _any_ node, including an ALL node. I like the symmetry that provides.
Updates as tests complete. Hoping for significant Elo increases, but not really expecting them...
-
- Posts: 1822
- Joined: Thu Mar 09, 2006 11:54 pm
- Location: The Netherlands
Re: Singular Extensions
bob wrote:
mcostalba wrote:
bob wrote: Any other comments, corrections, additions or suggestions???
Yes, idea (6) is not correctly laid out; in particular, this part is to be removed:
Code: Select all
If the score appears to be "reasonably better" than the current search bound
BTW, if idea (6) doesn't work for you, it means an implementation bug. I suggest you better document yourself by reading the sources (instead of being told about them) before starting your tests.
Just like the null-move verification search doesn't work for me? And when I comment it out of SF, I get a 2-3 Elo improvement as well? Not sure what you are talking about. This is code that I have from 10-15 years ago, when several of us were experimenting with it. I am not quoting how anybody is doing this today, other than to mention that ip* and robo* used the _idea_ that you guys copied. I didn't give a single idea above that I do not already have code for (unfortunately, most of it is going to need significant modification, since Crafty has changed a ton since the 1997 time frame when much of this was written and tested). When I played with this, one limit I tried was to check the TT bound: it didn't seem very useful to do the SE test on a position that returned a bound somewhat lower than the current window...

Correction: it was around 1999-2000 that GCP extensively experimented with singular extensions in Crafty. Did you get any feedback from him at the time, and/or source code changes? I'm not aware of how successful his attempts were. It seems he also had dug up how Hsu implemented it...

Vincent
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Singular Extensions
diep wrote: Correction: it was around 1999-2000 that GCP extensively experimented with singular extensions in Crafty. Did you get any feedback from him at the time, and/or source code changes? I'm not aware of how successful his attempts were.

No feedback that I can recall. The only detailed conversations during the SE extensions work were between Bruce, myself, and Dave Kittinger.
-
- Posts: 1822
- Joined: Thu Mar 09, 2006 11:54 pm
- Location: The Netherlands
Re: Singular Extensions
bob wrote: No feedback that I can recall. The only detailed conversations during the SE extensions work were between Bruce, myself, and Dave Kittinger.

Oh, I recall now that I have the year wrong. It was more like 2001 or 2002.
I had big discussions with GCP on it. I remember Crafty at the time lost too many plies with it to break even. Diep is using SE, some sort of Bruce implementation; however, I still want to get rid of them, as they deteriorate the branching factor. I find that principle not OK.
For example, objectively, after 1.e4 h5 2.Qxh5, the reply ...Rxh5 is a singular extension: it gives a cutoff and it is the only move, and extending singular captures gives a really big boost in tactical strength. Yet the idea of extending Rxh5 here doesn't appear to be objectively correct to me.
In some versions it worked, then for some years it was turned off; the latest setting is turned on, if I remember well.
Diep had its first working implementation of SE in 1994, in the first Diep version. I had actually invented SE myself, but had not yet invented reducing the search depth for the test, so they were quite pricey. Well, who cares, if you just get 4 plies anyway. Which reduction would you want if your total search depth after 3 minutes is 5 or 6 plies?
I can remember the joy when they announced a mate in 6 against Bionic in my first tournament - which Bionic didn't see very quickly, of course.

Vincent
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Singular Extensions
diep wrote: Oh, I recall now that I have the year wrong. It was more like 2001 or 2002. ... I can remember the joy when they announced a mate in 6 against Bionic in my first tournament - which Bionic didn't see very quickly, of course.

What was Bionic? The only one I remember was Bionic Impakt, which was a Crafty clone. That didn't happen until 1996 or so, however...
Bruce had good luck with SE (the cheapo version). It never seemed to pan out for me, but it may have simply been bad luck that I never hit on a decent tuning. I plan on addressing that version of SE once I finish testing the IP*-like version...