Singular extension improvement idea

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

jd1
Posts: 269
Joined: Wed Oct 24, 2012 2:07 am

Singular extension improvement idea

Post by jd1 »

Hi all,

I'm getting quite a significant improvement with the following idea - just wondered if anyone has tried it? Or might like to try it?

If:
- a) the singular exclusion search fails (i.e. one of the other moves fails high in a reduced-depth search with a minimal window at ttValue - margin)
- b) the ttMove fails to produce a cutoff.

Then:
- I try the move that caused the cutoff in a) next, regardless of where the move ordering would otherwise have put it. Note that because of LMR this also means it won't be reduced.

Note:
- the singular extension code is the same as in SF.
- the idea is that the move that caused the singular exclusion search to fail on a reduced lower beta search is more likely to produce a cutoff than any other move after the ttMove.
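For concreteness, the reordering step could look something like this. This is a minimal sketch: the `Move` type, its `id` field, and the function name are illustrative, not taken from any particular engine.

```cpp
#include <algorithm>
#include <vector>

// Illustrative move type; a real engine would use its own representation.
struct Move { int id; };

// After the singular exclusion search fails (some alternative reached the
// reduced window) and the ttMove itself does not cut off, promote the move
// that failed high to the front of the remaining list. Since LMR reductions
// grow with the move number, searching it early also means it is not reduced.
void promoteExclusionFailHigh(std::vector<Move>& remaining, int failHighId) {
    auto it = std::find_if(remaining.begin(), remaining.end(),
                           [&](const Move& m) { return m.id == failHighId; });
    if (it != remaining.end())
        std::rotate(remaining.begin(), it, it + 1);  // stable promotion to front
}
```

`std::rotate` keeps the relative order of the other moves intact, so the rest of the move ordering is undisturbed.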
Cardoso
Posts: 362
Joined: Thu Mar 16, 2006 7:39 pm
Location: Portugal
Full name: Alvaro Cardoso

Re: Singular extension improvement idea

Post by Cardoso »

Hi,
So after searching the hash move, you try the move that failed high in the exclusion search.
IIRC the exclusion search window is something like [ttValue-margin, ttValue-margin+1], if margin is small then it might make sense to try the move that FH.
Also keep in mind that the exclusion search is at depth/2, which isn't accurate.
I'd like to know what you mean by "a significant improvement".
How many games did you run, and what were the results?
I've tested SE many times in checkers; in my last test I got a 5 Elo improvement by turning it off, over 2000 games at 8 seconds per move using 3 threads.
It's painful to run so many games and so many tests just to try an idea.
Usually I take a full week just to do 2000 games at 8 seconds per move.
I used several statistics to get a better idea of the behaviour of the SE idea, like the number of times the exclusion search was made, the number of times SE produced an extension, and the number of times the SE move really did fail high when searched at full depth (plus the extension).
One restriction I apply, which I don't see in SF, is requiring "ttValue >= beta", since to me it doesn't make sense to do exclusion searches if ttValue is below alpha.
Anyway, I suppose the SE idea is to see if the hash move fails high; that makes the search return without searching the rest of the moves.
But this has a cost: the exclusion search itself (which is tiny if done at depth/2) and the SE extension itself, which will increase the tree size.
Anyway, my suggestion is that you run a final test against a version without SE.
I'm at work and can't give more details about my tests, but when I get home I can give you my test results.
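As a sketch of the null-window exclusion search described above: all names here are illustrative, and `searchExcluding` stands in for a real reduced-depth search of the position with the ttMove excluded (typically at depth/2).

```cpp
#include <functional>

// The ttMove is judged singular when a reduced-depth search of all OTHER
// moves, with the minimal window [ttValue - margin, ttValue - margin + 1],
// fails low, i.e. no alternative comes close to the ttMove's score.
bool ttMoveIsSingular(int ttValue, int margin,
                      const std::function<int(int alpha, int beta)>& searchExcluding) {
    int alpha = ttValue - margin;                  // lower edge of the null window
    int value = searchExcluding(alpha, alpha + 1); // typically done at depth/2
    return value <= alpha;                         // all alternatives failed low
}
```

When `ttMoveIsSingular` returns false, some other move reached the reduced window: that is exactly the move jd1 proposes to promote in the ordering.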

best regards,
Alvaro
jd1
Posts: 269
Joined: Wed Oct 24, 2012 2:07 am

Re: Singular extension improvement idea

Post by jd1 »

Cardoso wrote:Hi,
So after searching the hash move, you try the move that failed high in the exclusion search.
IIRC the exclusion search window is something like [ttValue-margin, ttValue-margin+1], if margin is small then it might make sense to try the move that FH.
Also keep in mind that the exclusion search is at depth/2, which isn't accurate.
I'd like to know what you mean by "a significant improvement".
How many games did you run, and what were the results?
I've tested SE many times in checkers; in my last test I got a 5 Elo improvement by turning it off, over 2000 games at 8 seconds per move using 3 threads.
It's painful to run so many games and so many tests just to try an idea.
Usually I take a full week just to do 2000 games at 8 seconds per move.
I used several statistics to get a better idea of the behaviour of the SE idea, like the number of times the exclusion search was made, the number of times SE produced an extension, and the number of times the SE move really did fail high when searched at full depth (plus the extension).
One restriction I apply, which I don't see in SF, is requiring "ttValue >= beta", since to me it doesn't make sense to do exclusion searches if ttValue is below alpha.
Anyway, I suppose the SE idea is to see if the hash move fails high; that makes the search return without searching the rest of the moves.
But this has a cost: the exclusion search itself (which is tiny if done at depth/2) and the SE extension itself, which will increase the tree size.
Anyway, my suggestion is that you run a final test against a version without SE.
I'm at work and can't give more details about my tests, but when I get home I can give you my test results.

best regards,
Alvaro
Hi Alvaro,

Thanks for your comments. That description of the idea is correct: basically a move reordering with implications for LMR. I haven't run any tests against a version without SE, so thanks for that suggestion. The SE change was already tested and committed first, hence I have been testing against the version with standard SE (as implemented in Stockfish). I have the following results:

1) +10 Elo over 5000 games at 5' + 0.05'.

2) At 15' + 0.5' it is currently at roughly 1000 games (still running) but also tracking about +10 Elo. Obviously this is not enough games and I should probably try a longer TC too, but I thought I may as well see if anyone has tried something similar.

As you can see, I test at much shorter time controls; however, the engine gets to depth 14 per move on average at 15' + 0.15', which is more than enough depth for SE (triggered at depth 6 in my implementation) to be used many times.

I haven't tried the ttValue >= beta restriction, however I do check (as in Stockfish) that the ttValue is a lower bound or an exact score. The ttValue >= beta condition makes sense to me, although I wonder: since the ttValue may only be a lower bound, the true value could be >= beta even when ttValue < beta.
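The bound check being discussed could be expressed roughly as follows. This is a sketch with illustrative names; the `requireTtValueAtLeastBeta` flag models Cardoso's extra restriction, which neither engine is confirmed to use in exactly this form.

```cpp
enum Bound { BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

// SE is only attempted when the stored score is a lower bound or exact,
// because an upper bound says nothing about how good the ttMove really is.
// The optional ttValue >= beta restriction is stricter; note that a lower
// bound below beta could still correspond to a true value >= beta.
bool allowSingularSearch(Bound ttBound, int ttValue, int beta,
                         bool requireTtValueAtLeastBeta) {
    if (ttBound == BOUND_UPPER)
        return false;
    if (requireTtValueAtLeastBeta && ttValue < beta)
        return false;
    return true;
}
```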

I haven't collected statistics yet; I should, because it would be interesting and informative to see the stats you mentioned, as well as:
a) how often the move that failed high in the exclusion search also causes a cutoff at full depth.
b) how often a move that was ordered ahead of the move that failed high in the exclusion search causes a cutoff (i.e. reordering was a loss).

Best regards,
Jerry
jd1
Posts: 269
Joined: Wed Oct 24, 2012 2:07 am

Re: Singular extension improvement idea

Post by jd1 »

I got some stats, however they are not that promising:
- when the exclusion search fails, the tt move doesn't cut off, and we reorder the moves: we fail low 65.5% of the time. Of the 34.5% cutoffs, 23.3% are from the move that failed high in the exclusion search and 12.2% are from some other move.
- unfortunately, when turning off move reordering, there were only a few isolated cases where the move that failed high in the exclusion search:
a) was not already 2nd in the move ordering; and
b) caused a cutoff
- hence, it seems my idea does not work: the move ordering is already too good and the depth/2 exclusion search is not accurate enough, at least in Toga II. In addition, the Elo improvement in the test at 15' + 0.15' has disappeared with more games. Perhaps there is something going on with LMR, but in any case I apologize for posting too soon. I got a little excited because the idea made sense to me and the preliminary results looked good.
Cardoso
Posts: 362
Joined: Thu Mar 16, 2006 7:39 pm
Location: Portugal
Full name: Alvaro Cardoso

Re: Singular extension improvement idea

Post by Cardoso »

Hi Jerry,
5000 games isn't bad and I think they can give some idea of the effectiveness of SE, at least with the resources I have. I know perfectly well it should be on the order of 20K or 30K games, but I simply don't have the patience to wait that long (probably 3 months at 8 seconds per move).

Here is some statistics I print from the starting position:

Code: Select all

nodes=3,130,935,158 SE_tries=96,848 0.3809% SE_move_is_singular=7,232 7.47% SE_FH=6,319 87.38%
SE_tries is the number of exclusion searches, and SE_FH is the number of times the SE move really did fail high after being searched at full depth plus the SE extension.
and from a sharp position that ends up winning:

Code: Select all

nodes=2,839,217,931 SE_tries=282,527 0.6844% SE_move_is_singular=99,137 35.09% SE_FH=96,293 97.13%
And yes, I think it would be interesting to include a statistic for the move that failed high in the exclusion search in case the hash move failed low.
All of this info is important to have an idea of the behaviour of the algorithm.
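Counters like these could be tracked with something along the following lines. The struct and field names are illustrative, not Cardoso's actual code; the ratios do match his first log line (7232/96848 ≈ 7.47%, 6319/7232 ≈ 87.38%).

```cpp
// Minimal instrumentation sketch for SE behaviour (illustrative names):
//   tries    = exclusion searches performed
//   singular = exclusion searches that failed low (ttMove was extended)
//   failHigh = extended ttMoves that then failed high at full depth
struct SEStats {
    long long tries = 0, singular = 0, failHigh = 0;
    double singularPct() const { return tries ? 100.0 * singular / tries : 0.0; }
    double failHighPct() const { return singular ? 100.0 * failHigh / singular : 0.0; }
};
```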
With SE turned on, on sharp positions I see higher scores reported for the same iteration depth, but of course at the cost of a higher node count. So SE can indeed find a more accurate evaluation in sharp positions, but at the cost of a bigger tree size and hence more time.

best wishes,
Alvaro
Cardoso
Posts: 362
Joined: Thu Mar 16, 2006 7:39 pm
Location: Portugal
Full name: Alvaro Cardoso

Re: Singular extension improvement idea

Post by Cardoso »

Jerry,
That's OK; in fact your work helped us understand SE a little better.
Failures are of the utmost importance, since we can learn a lot from them.
The trick is to keep trying.
Anyway, I think SE has something of a myth attached to it since it came from Deep Blue. But they could afford it with all that hardware power.
The SF formulation is different from DB's, but I think SE only helps in the context of many threads, like more than 20. With fewer threads the cost in node count is a bit high.
But chess and checkers are different games of course, and my experience is in checkers alone.
Another thing you can try is to only do SE at depths greater than 12 or 14 plies.
SF uses 8 plies, but the nearer we are to the tips, the less likely good stuff is to propagate up to the root.
With:

Code: Select all

int SingularExtensionNode = depth >= 14 * PLY && ...
at least the hash move is more solid and the exclusion search is at least 7 plies deep, and since it is nearer to the root, good stuff is more likely to survive to the root.

Just keep trying, maybe in chess there is a good formulation for SE.

best regards,
Alvaro
User avatar
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Singular extension improvement idea

Post by cdani »

Hello!
Andscacs SE should probably be a little better than Stockfish's:

Code: Select all

int jj = auxhash.score - 3 * (depth / 2);  // margin grows with depth
int score2 = alpha_beta<NO_PV>(ss, jj - 1, jj, n, null_si, n + depth / 2 - 1, cutNode, true);  // null window [jj-1, jj]
elcabesa
Posts: 855
Joined: Sun May 23, 2010 1:32 pm

Re: Singular extension improvement idea

Post by elcabesa »

I really like your idea and I'm giving it a try.
At the moment the first hack is running at 6s + 0.1s, and the results so far are:
4-1-7 [W-L-D] (0.625) +88Elo

In a few hours we'll have some results.
jd1
Posts: 269
Joined: Wed Oct 24, 2012 2:07 am

Re: Singular extension improvement idea

Post by jd1 »

elcabesa wrote:I really like your idea and I'm giving it a try.
At the moment the first hack is running at 6s + 0.1s, and the results so far are:
4-1-7 [W-L-D] (0.625) +88Elo

In a few hours we'll have some results.
Thanks Marco! I look forward to seeing the result.
elcabesa
Posts: 855
Joined: Sun May 23, 2010 1:32 pm

Re: Singular extension improvement idea

Post by elcabesa »

After some restarts due to small debugging fixes and improvements:

145-143-318 [0.502] +1.15Elo +/- 19.06