Stockfish 1.8 tweaks

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

mcostalba wrote:
QED wrote:Original stockfish lost to SMRC +186 =598 -216 (LOS=15:84), that looks promising. Next test will be SMRC version of adaptive Tinapa.
Hi Vratko, your idea is interesting, but I have to look a bit further because it is not easy for me to understand.
Ok, I have read your patch more carefully. From what I have understood, let's say we have the following position:

[D] 4r3/8/3k4/3p4/8/8/1K6/3R4 b - - 0 2

Suppose that black tries the move d4 and we search it; now at the next ply white has the SE move Rxd4. You say that because Rxd4 is a SE move we should not reduce the original d4 move, but this looks strange to me because d4 is a clear error and I would like it to fail low as fast as possible, so as to avoid wasting search time on a clearly bad variation.

Am I missing something ?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Stockfish 1.8 tweaks

Post by bob »

mcostalba wrote:
mcostalba wrote:
QED wrote:Original stockfish lost to SMRC +186 =598 -216 (LOS=15:84), that looks promising. Next test will be SMRC version of adaptive Tinapa.
Hi Vratko, your idea is interesting, but I have to look a bit further because it is not easy for me to understand.
Ok, I have read your patch more carefully. From what I have understood, let's say we have the following position:

[D] 4r3/8/3k4/3p4/8/8/1K6/3R4 b - - 0 2

Suppose that black tries the move d4 and we search it; now at the next ply white has the SE move Rxd4. You say that because Rxd4 is a SE move we should not reduce the original d4 move, but this looks strange to me because d4 is a clear error and I would like it to fail low as fast as possible, so as to avoid wasting search time on a clearly bad variation.

Am I missing something ?
Couple of points. First, I don't think Rxd4 should be a SE move. Why would we want to search this move deeper? It is obvious that it must fail high even without the extra ply, since the score is changing by a pawn.

Second, this is one of the things I didn't like about this idea when it first surfaced. It is suggesting that we test a move for singularity just because it is best in some position and gets stored. Hsu more correctly identifies moves as singular by analyzing the position, whether it be a CUT or PV node. If you enter this position for the first time, Rxd4 doesn't get extended. Yet at the next ply, when you try it again, it now gets searched _two_ plies deeper. Why? If the TT is big enough you will test a large fraction of the moves that failed high the last iteration, but that's it, except for the transpositions from this search.
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Marco Costalba wrote:Ok, I have read your patch more carefully. From what I have understood, let's say we have the following position:

[D] 4r3/8/3k4/3p4/8/8/1K6/3R4 b - - 0 2

Suppose that black tries the move d4 and we search it; now at the next ply white has the SE move Rxd4. You say that because Rxd4 is a SE move we should not reduce the original d4 move, but this looks strange to me because d4 is a clear error and I would like it to fail low as fast as possible, so as to avoid wasting search time on a clearly bad variation.

Am I missing something ?
The question is whether d4 really is a clear error. Original stockfish still extends Rxd4 by one ply, because it really is a singular move. Therefore d4 forces things, so it is a dangerous move. In cases where it is a clear error, aggressive null-move pruning (or, at low depths, razoring) will keep the subtree small anyway, and the slowdown may be compensated by sometimes finding deep forced tactics at lower depths.

I have developed SMRC primarily to deal with serial zugzwang positions. I must confess I had not really read from the code what the singular move is, and now I am surprised that it is detected by comparing the other moves to ttValue minus a margin. I thought it was alpha minus a margin. :oops:

In the alpha case, d4 would probably not be a threat (compared to the margin), so Rxd4 would not be a singular refutation, so d4 would not be dangerous and it would stay reduced (and Rxd4 not extended). But there probably is a good reason why the condition uses ttValue (minus the margin), so I will think about it more deeply.
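For readers following along, the TT-based detection being discussed can be sketched roughly like this (illustrative names and a simplified model, not the actual Stockfish 1.8 code): the hash move is excluded, the remaining moves are searched against a bound lowered to ttValue minus a margin, and if none of them reaches it, the hash move is considered singular.

```cpp
#include <vector>

// Simplified sketch of TT-based singular detection (hypothetical names,
// not the actual Stockfish 1.8 implementation).
// ttValue: score of the hashed best move; otherScores: scores the
// remaining moves obtain in a reduced-depth exclusion search.
bool is_singular(int ttValue, int margin, const std::vector<int>& otherScores)
{
    int rBeta = ttValue - margin;     // lowered bound for the exclusion search
    for (int score : otherScores)
        if (score >= rBeta)           // some alternative is nearly as good:
            return false;             // the tt move is not singular
    return true;                      // every alternative failed low -> singular
}
```

In the "alpha minus margin" variant QED expected, rBeta would be derived from alpha instead of ttValue, which is exactly the difference being debated above.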

EDIT: Typos.
Testing conditions:
tc=/0:40+.1 option.Threads=1 option.Hash=32 option.Ponder=false -pgnin gaviota-starters.pgn -concurrency 1 -repeat -games 1000
hash clear between games
make build ARCH=x86-64 COMP=gcc
around 680kps on 1 thread at startposition.
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

bob wrote: Couple of points. First, I don't think Rxd4 should be a SE move. Why would we want to search this move deeper? It is obvious that it must fail high even without the extra ply, since the score is changing by a pawn.
I think that, to keep readers from misunderstanding and making a mess out of this, we should really try to separate the arguments.

(1) One question is whether Rxd4 turns out to be a SE move, i.e. whether, applying the TT-based SE algorithm as it is in SF and as you are now testing it, Rxd4 is detected as a SE move. My answer is: yes, the current algorithm detects Rxd4 as SE.

(2) Another _different_ question is whether it is a good idea to improve upon the current algorithm so that it no longer detects Rxd4 as SE.

I was arguing about point (1); I never mentioned point (2), nor am I interested in it at the moment. If instead you are already at point (2), you may want to make clear that you have changed the subject, otherwise people get confused.
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

QED wrote: The question is whether d4 really is a clear error. Original stockfish still extends Rxd4 by one ply, because it really is a singular move. Therefore d4 forces things, so it is a dangerous move. In cases where it is a clear error, aggressive null-move pruning (or, at low depths, razoring) will keep the subtree small anyway, and the slowdown may be compensated by sometimes finding deep forced tactics at lower depths.
Yes, this is also what I think (but have not proven): Rxd4, like a lot of cut moves refuting bad previous moves, gets unnecessarily extended, but the arsenal of 'far from beta at low depth' pruning should take care of it with small overhead, while at the same time you are able to discover real threats in the cases where SE actually is the proper way to go.
QED wrote: But there probably is a good reason why the condition uses ttValue (minus margin), so I will think about it more deeply.
I also think so. I never tested throwing the draft (distance to beta) into the SE conditions, but my feeling is that it is not a good idea: bad lines get pruned anyhow well before reaching the qsearch level and, because SE is active at relatively high depths, still far from the leaves, you cannot trust the current static position evaluation (or TT value) too much; many things could happen before reaching qsearch().
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Robert Hyatt wrote:It is suggesting that we test a move for singularity just because it is best in some position and gets stored.
It depends. In Stockfish, after IID, we set "ttMove = ss->bestMove;", and the IID search skips null move, so every time the condition for IID is fulfilled we have a ttMove to test for singularity. Still a valid point for the cases where IID is not applied.
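The flow Vratko describes can be sketched like this (illustrative names only, not the literal SF code; the stand-in IID function just returns a fixed move id rather than running a real search):

```cpp
// Illustrative sketch: IID supplies a candidate move for the singular
// test when the hash probe gave no move (names are made up, not SF code).
constexpr int MOVE_NONE = 0;

// Stand-in for a reduced-depth IID search; a real engine would run a
// search without null move and return ss->bestMove.
int internal_deepening(int /*depth*/) { return 42; /* some legal move id */ }

int candidate_for_singular_test(int ttMove, int depth)
{
    if (ttMove == MOVE_NONE)                       // hash probe gave no move:
        ttMove = internal_deepening(depth / 2);    // IID fills in a best move
    return ttMove;                                 // move to test for singularity
}
```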
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Stockfish 1.8 tweaks

Post by bob »

QED wrote:
Robert Hyatt wrote:It is suggesting that we test a move for singularity just because it is best in some position and gets stored.
It depends. In Stockfish, after IID, we set "ttMove = ss->bestMove;", and the IID search skips null move, so every time the condition for IID is fulfilled we have a ttMove to test for singularity. Still a valid point for the cases where IID is not applied.
Do you do IID anywhere but along the PV? I don't think it makes a lot of sense to go beyond that...
zamar
Posts: 613
Joined: Sun Jan 18, 2009 7:03 am

Re: Stockfish 1.8 tweaks

Post by zamar »

mcostalba wrote:
QED wrote: But there probably is a good reason why the condition uses ttValue (minus margin), so I will think about it more deeply.
I also think so. I never tested throwing the draft (distance to beta) into the SE conditions, but my feeling is that it is not a good idea: bad lines get pruned anyhow well before reaching the qsearch level and, because SE is active at relatively high depths, still far from the leaves, you cannot trust the current static position evaluation (or TT value) too much; many things could happen before reaching qsearch().
My intuition says the same: SE is a great way to extend speculative lines where one side sacrifices a lot of material for a king attack or to deliver perpetual check. In these cases the static eval can be far below beta.
Joona Kiiski
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Stockfish 1.8 tweaks

Post by Daniel Shawul »

Stockfish's IID should not help at all, because the criteria for IID and the singular search do not match.
IID is done with depth / 2, while singularity tests are done for a tt move searched to at least depth - 3.
That means that for depth >= 8, nothing the singularity tests need comes from IID.
The way I did it in Scorpio was to use depth - 4 for both, so that IID always gives me a move to test for singularity.
Also, you have a 'fail high' node condition for the IID tests, which causes some mismatch even if the depths were the same.
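The depth mismatch Daniel describes can be checked with a tiny model (a simplification, not engine code): IID searches to depth / 2, while the singular test requires the hashed move to have been searched to at least depth - 3, so under integer division the IID result stops qualifying as the nominal depth grows.

```cpp
// Sketch of the IID / singular-test depth mismatch (simplified model).
// IID searches to depth / 2; the singular test accepts a tt move only
// if it was searched to at least depth - 3.
bool iid_result_usable_for_se(int depth)
{
    int iidDepth = depth / 2;        // depth the IID best move was searched to
    int required = depth - 3;        // minimum draft the singular test accepts
    return iidDepth >= required;
}
```

For the depths Daniel mentions (depth >= 8), depth / 2 is strictly less than depth - 3, so the IID move never qualifies; with depth - 4 used for both, as in Scorpio, the comparison becomes depth - 4 >= depth - 4 and it always does.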
Edsel Apostol
Posts: 803
Joined: Mon Jul 17, 2006 5:53 am
Full name: Edsel Apostol

Re: Stockfish 1.8 tweaks

Post by Edsel Apostol »

Daniel Shawul wrote:Stockfish's IID should not help at all, because the criteria for IID and the singular search do not match.
IID is done with depth / 2, while singularity tests are done for a tt move searched to at least depth - 3.
That means that for depth >= 8, nothing the singularity tests need comes from IID.
The way I did it in Scorpio was to use depth - 4 for both, so that IID always gives me a move to test for singularity.
Also, you have a 'fail high' node condition for the IID tests, which causes some mismatch even if the depths were the same.
Hi Dan, does that "depth - 4" implementation work better than SF's? I've tried experimenting with using the best move from IID for SE, but my limited testing can't really determine whether it's better or not.