Singular Extensions revisited


mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Singular Extensions revisited

Post by mjlef »

Have any details been published on how various authors implemented singular extensions?

Playing around a bit, the only thing I can come up with that makes sense (and has tested OK) is, at PV and expected CUT nodes, to use the hash move/value and a reduced-depth search to prove one move is better than the others by some margin. It is not too expensive (as long as the reduced-depth search is shallow enough not to cost too many nodes).
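
Here is roughly what I mean, as a C sketch. The helper names (Search, GenerateMoves, MakeMove, UnmakeMove) and the constants are illustrative assumptions, not my actual code or any particular engine's API:

Code: Select all

typedef int Move;

extern int  Search(int alpha, int beta, int depth);  /* negamax, returns score */
extern int  GenerateMoves(Move *list);               /* fills list, returns count */
extern void MakeMove(Move m);
extern void UnmakeMove(Move m);

#define SE_MARGIN    50   /* "better by some margin", in centipawns; needs tuning */
#define SE_REDUCTION 3    /* plies stripped off the proof search */

/* Returns 1 if hashMove looks singular: every alternative fails low
   against a null window offset below the hash value, at reduced depth. */
int IsSingular(Move hashMove, int hashValue, int depth)
{
    Move list[256];
    int  n     = GenerateMoves(list);
    int  sBeta = hashValue - SE_MARGIN;

    for (int i = 0; i < n; i++) {
        if (list[i] == hashMove)
            continue;                 /* test only the alternatives */
        MakeMove(list[i]);
        int score = -Search(-sBeta, -sBeta + 1, depth - 1 - SE_REDUCTION);
        UnmakeMove(list[i]);
        if (score >= sBeta)
            return 0;                 /* an alternative came close: not singular */
    }
    return 1;                         /* nothing came close: extend the hash move */
}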

Any references I can look at?

Mark
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Singular Extensions revisited

Post by bob »

mjlef wrote:Have any details been published on how various authors implemented singular extensions?

Playing around a bit, the only thing I can come up with that makes sense (and has tested OK) is, at PV and expected CUT nodes, to use the hash move/value and a reduced-depth search to prove one move is better than the others by some margin. It is not too expensive (as long as the reduced-depth search is shallow enough not to cost too many nodes).

Any references I can look at?

Mark
I've done it two ways. In the 1993/1994 version of Cray Blitz, I went to the agonizing trouble of doing basically what Hsu/Campbell described in their JICCA paper. It was an expensive approach.

In past versions of Crafty, I did a simpler approach, as used by Bruce Moreland in later versions of Ferret. What we did was, at the start of search, first search through every move with reduced depth. When we found the score for the first move, we searched the rest with an offset (down) window, hoping that nothing would fail high. If none did, we considered the first move "singular" and remembered it. We then did the normal search, but searched this singular move to a deeper depth. If any move failed high on the lowered window, I gave up and said "nothing is singular", although this is not completely accurate. It might be that the fail-high move is actually significantly better than the best so far, and it is now singular. In Cray Blitz, I handled this as Hsu/Campbell explained, which was messy as hell. In the Crafty version, I just threw my hands up on a fail high, said "nothing is singular", and went about my business as normal...
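
For concreteness, the pre-scan might be sketched like this in C (types and helpers as in the sketch earlier in the thread; all names and constants are illustrative, not actual Crafty or Ferret code):

Code: Select all

#define INF       32000
#define NO_MOVE   0
#define SE_OFFSET 50     /* how far below the first move's score the window sits */

/* Reduced-depth pre-scan over the move list: score the first move with a
   full window, then check whether any other move reaches within SE_OFFSET
   of it.  If none does, the first move is "singular" and gets searched
   one ply deeper in the normal search that follows. */
Move FindSingularMove(Move *list, int n, int depth, int reduction)
{
    MakeMove(list[0]);
    int best = -Search(-INF, INF, depth - 1 - reduction);
    UnmakeMove(list[0]);

    int sBeta = best - SE_OFFSET;        /* the offset (down) window */
    for (int i = 1; i < n; i++) {
        MakeMove(list[i]);
        int score = -Search(-sBeta, -sBeta + 1, depth - 1 - reduction);
        UnmakeMove(list[i]);
        if (score >= sBeta)
            return NO_MOVE;              /* a fail high: give up, nothing singular */
    }
    return list[0];
}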
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Singular Extensions revisited

Post by mjlef »

bob wrote:
mjlef wrote:Have any details been published on how various authors implemented singular extensions?

Playing around a bit, the only thing I can come up with that makes sense (and has tested OK) is, at PV and expected CUT nodes, to use the hash move/value and a reduced-depth search to prove one move is better than the others by some margin. It is not too expensive (as long as the reduced-depth search is shallow enough not to cost too many nodes).

Any references I can look at?

Mark
I've done it two ways. In the 1993/1994 version of Cray Blitz, I went to the agonizing trouble of doing basically what Hsu/Campbell described in their JICCA paper. It was an expensive approach.

In past versions of Crafty, I did a simpler approach, as used by Bruce Moreland in later versions of Ferret. What we did was, at the start of search, first search through every move with reduced depth. When we found the score for the first move, we searched the rest with an offset (down) window, hoping that nothing would fail high. If none did, we considered the first move "singular" and remembered it. We then did the normal search, but searched this singular move to a deeper depth. If any move failed high on the lowered window, I gave up and said "nothing is singular", although this is not completely accurate. It might be that the fail-high move is actually significantly better than the best so far, and it is now singular. In Cray Blitz, I handled this as Hsu/Campbell explained, which was messy as hell. In the Crafty version, I just threw my hands up on a fail high, said "nothing is singular", and went about my business as normal...
I have tried something like that. But it never seemed to help much. Did it help Crafty, or did you give up on it?

I really think this is how humans think. They determine one move is forced since the rest are worse, and that makes them think more deeply about the move. It is something I think computers could be great at if the conditions are tuned.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Singular Extensions revisited

Post by bob »

mjlef wrote:
bob wrote:
mjlef wrote:Have any details been published on how various authors implemented singular extensions?

Playing around a bit, the only thing I can come up with that makes sense (and has tested OK) is, at PV and expected CUT nodes, to use the hash move/value and a reduced-depth search to prove one move is better than the others by some margin. It is not too expensive (as long as the reduced-depth search is shallow enough not to cost too many nodes).

Any references I can look at?

Mark
I've done it two ways. In the 1993/1994 version of Cray Blitz, I went to the agonizing trouble of doing basically what Hsu/Campbell described in their JICCA paper. It was an expensive approach.

In past versions of Crafty, I did a simpler approach, as used by Bruce Moreland in later versions of Ferret. What we did was, at the start of search, first search through every move with reduced depth. When we found the score for the first move, we searched the rest with an offset (down) window, hoping that nothing would fail high. If none did, we considered the first move "singular" and remembered it. We then did the normal search, but searched this singular move to a deeper depth. If any move failed high on the lowered window, I gave up and said "nothing is singular", although this is not completely accurate. It might be that the fail-high move is actually significantly better than the best so far, and it is now singular. In Cray Blitz, I handled this as Hsu/Campbell explained, which was messy as hell. In the Crafty version, I just threw my hands up on a fail high, said "nothing is singular", and went about my business as normal...
I have tried something like that. But it never seemed to help much. Did it help Crafty, or did you give up on it?

I really think this is how humans think. They determine one move is forced since the rest are worse, and that makes them think more deeply about the move. It is something I think computers could be great at if the conditions are tuned.
I could not tell a difference, but this was prior to cluster testing days. I saved the "singular" test source code to try again one day. There is also another version that has some potential: the old null-move threat-detection idea. It goes like this:

When a move fails high, before you return beta, you do a quick null-move search (normal null move, but the depth-reduction amount is open for debate). If the null-move search fails low, you have a quandary: the current move fails high, yet doing nothing fails low. It is therefore likely that the move you failed high on is holding off some serious threat. So before you fail high, you first re-search the move with an additional ply and use that result rather than the original score. If it is still enough to fail high, do so. If not, you try to find a better move, and each time you find one you can re-apply this same logic to determine whether it, too, is masking a potential threat.
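
In sketch form it might look like this (NullSearch is assumed to make a null move and search the resulting position; all names are illustrative, reusing the hypothetical helpers from earlier in the thread):

Code: Select all

extern int NullSearch(int alpha, int beta, int depth);  /* null move + search */

/* Called when "move" has just failed high at this node (score >= beta).
   R is the null-move depth reduction, which is open for debate. */
int VerifiedFailHigh(Move move, int alpha, int beta, int depth, int R)
{
    if (NullSearch(beta - 1, beta, depth - R) < beta) {
        /* doing nothing fails low, so "move" may be holding off a threat:
           re-search it with an additional ply and use that result instead */
        MakeMove(move);
        int score = -Search(-beta, -alpha, depth);  /* depth-1 plus one extra ply */
        UnmakeMove(move);
        return score;
    }
    return beta;                                    /* original fail high stands */
}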
Gian-Carlo Pascutto
Posts: 1243
Joined: Sat Dec 13, 2008 7:00 pm

Re: Singular Extensions revisited

Post by Gian-Carlo Pascutto »

bob wrote: When a move fails high, before you return beta, you do a quick null-move search (normal null move, but the depth-reduction amount is open for debate). If the null-move search fails low, you have a quandary: the current move fails high, yet doing nothing fails low. It is therefore likely that the move you failed high on is holding off some serious threat.
It is probably more likely that you are recapturing.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Singular Extensions revisited

Post by bob »

Gian-Carlo Pascutto wrote:
bob wrote: When a move fails high, before you return beta, you do a quick null-move search (normal null move, but the depth-reduction amount is open for debate). If the null-move search fails low, you have a quandary: the current move fails high, yet doing nothing fails low. It is therefore likely that the move you failed high on is holding off some serious threat.
It is probably more likely that you are recapturing.
That case was specifically excluded. I think Chrilly mentioned that in the paper where he described this null-move threat idea. In fact, that case is the one that causes any sort of SE approach the most trouble, as it will obviously be singular (if there is only one way to recapture) and trigger an extension that might be unwanted.
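
In sketch form, the exclusion is little more than a guard at the top of the fail-high test, something like this (IsCapture, ToSquare and previousMove are illustrative assumptions, not names from Crafty):

Code: Select all

extern int IsCapture(Move m);     /* does the move capture something? */
extern int ToSquare(Move m);      /* destination square of the move */

/* at the top of a VerifiedFailHigh()-style routine: */
if (IsCapture(move) && ToSquare(move) == ToSquare(previousMove))
    return beta;   /* lone recapture: trivially "singular", skip the extension test */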

There were some details about being in check as well; I would have to go back and look at the old Crafty source to check on that.
yoshiharu
Posts: 56
Joined: Sat Nov 11, 2006 11:14 pm

Re: Singular Extensions revisited

Post by yoshiharu »

bob wrote: I could not tell a difference, but this was prior to cluster testing days. I saved the "singular" test source code to try again one day. ...
I will wait :-)
bob wrote: There is also another version that has some potential: the old null-move threat-detection idea. It goes like this:

When a move fails high, before you return beta, you do a quick null-move search (normal null move, but the depth-reduction amount is open for debate). If the null-move search fails low, you have a quandary: the current move fails high, yet doing nothing fails low.
I'm probably missing something, but aren't you supposed to have this information already? If the null move you already searched fails low and you nonetheless find a move that fails high, isn't that enough to trigger this threat detection?

Cheers, mr
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Singular Extensions revisited

Post by bob »

yoshiharu wrote:
bob wrote: I could not tell a difference, but this was prior to cluster testing days. I saved the "singular" test source code to try again one day. ...
I will wait :-)
bob wrote: There is also another version that has some potential: the old null-move threat-detection idea. It goes like this:

When a move fails high, before you return beta, you do a quick null-move search (normal null move, but the depth-reduction amount is open for debate). If the null-move search fails low, you have a quandary: the current move fails high, yet doing nothing fails low.
I'm probably missing something, but aren't you supposed to have this information already? If the null move you already searched fails low and you nonetheless find a move that fails high, isn't that enough to trigger this threat detection?

Cheers, mr
Not quite. Lots of null-move searches fail low because you use the same window as the normal search. However, if you shift the alpha/beta window down by (say) half a pawn, the null-move search really should fail high almost all the time. If it doesn't, then this position is not just "not great", it is really "not good at all". And _that_ is what triggers the extension. If you want to experiment, take a program that uses null-move and make one simple change: when a null-move search fails low, before you continue the normal search and start trying real moves, do a second null-move search with an offset window such as beta-pawn-1 and beta-pawn. This will often fail high when the first one did not, because of the pawn margin. If it still doesn't, then the position must be pretty bad, and that is what we would use to trigger the extension.

Again, the idea is that one move fails high while a downward-offset null-move search fails low, which is a hint that this one move is really holding off some threat from the opponent.
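
As a sketch, using the hypothetical NullSearch from before (PAWN is an illustrative constant):

Code: Select all

#define PAWN 100   /* centipawns; the offset is tunable */

/* Call after the normal (beta-1, beta) null-move search has failed low,
   before trying real moves.  Returns 1 when even the window shifted a
   pawn below beta fails low -- the "not good at all" case. */
int ThreatDetected(int beta, int depth, int R)
{
    int score = NullSearch(beta - PAWN - 1, beta - PAWN, depth - R);
    return score < beta - PAWN;   /* usually fails high; a fail low is the signal */
}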
yoshiharu
Posts: 56
Joined: Sat Nov 11, 2006 11:14 pm

Re: Singular Extensions revisited

Post by yoshiharu »

bob wrote: Not quite. Lots of null-move searches fail low because you use the same window as the normal search. However, if you shift the alpha/beta window down by (say) half a pawn, the null-move search really should fail high almost all the time. If it doesn't, then this position is not just "not great", it is really "not good at all". And _that_ is what triggers the extension.
Ok, now I know what I missed :-)
I implemented a trick like this when I wrote the null-move part of my engine, but I had so many issues to fix elsewhere that I decided to keep it simple for the moment.
So, do you suggest doing this threat detection only in case of a fail high (re-search at increased depth if triggered), or at every node (which avoids re-searches, but extends every move even in desperate positions)?
Intuitively I would prefer the former, but maybe there are issues with re-searches...

Cheers, mr
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Singular Extensions revisited

Post by bob »

yoshiharu wrote:
bob wrote: Not quite. Lots of null-move searches fail low because you use the same window as the normal search. However, if you shift the alpha/beta window down by (say) half a pawn, the null-move search really should fail high almost all the time. If it doesn't, then this position is not just "not great", it is really "not good at all". And _that_ is what triggers the extension.
Ok, now I know what I missed :-)
I implemented a trick like this when I wrote the null-move part of my engine, but I had so many issues to fix elsewhere that I decided to keep it simple for the moment.
So, do you suggest doing this threat detection only in case of a fail high (re-search at increased depth if triggered), or at every node (which avoids re-searches, but extends every move even in desperate positions)?
Intuitively I would prefer the former, but maybe there are issues with re-searches...

Cheers, mr
I only did it at fail-high (or PV, of course) nodes, because there is no benefit to doing it at an ALL node: no move there will become a candidate "best move" that could trigger the extension. I deferred the downward-offset null-window search until I knew I had a "best move" that might need extending, to avoid doing that search at ALL nodes, where it is simply a waste of time.

The two null-window searches we do are different. At the top of search, you usually use beta-1, beta for the null-window search, to get out of searching anything should it fail high. This threat search instead uses beta-pawn-1 and beta-pawn (the "pawn" value can be varied: since we are lowering the alpha bound, the smaller the offset, the more likely a fail low and the more extensions you trigger; as you lower the window further, the chance of a fail low drops and the total number of these extensions drops). Looking at my old code, it turns out the margin I used was -1.5 pawns. Whether that was optimal or not I do not know. I have this on my list of things to check at some point, when I run out of other ideas.
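
Putting the pieces together, the shape of it at a fail-high node might be something like this (all names are illustrative, not my actual code; 150 centipawns reflects the -1.5 pawn margin mentioned above, which may not be optimal):

Code: Select all

#define THREAT_MARGIN 150   /* 1.5 pawns below beta */

/* inside the move loop, right after "score >= beta" for the current move: */
int probe = NullSearch(beta - THREAT_MARGIN - 1,
                       beta - THREAT_MARGIN, depth - R);
if (probe < beta - THREAT_MARGIN) {
    /* even the offset window fails low: this move is likely holding off
       a threat, so re-search it one ply deeper before trusting the cutoff */
    MakeMove(move);
    score = -Search(-beta, -alpha, depth);
    UnmakeMove(move);
}
if (score >= beta)
    return beta;   /* fail high, possibly confirmed by the deeper re-search */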