Stockfish 1.8 tweaks

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Stockfish 1.8 tweaks

Post by Daniel Shawul »

I picked the number 4 as a compromise. It makes the IID relatively cheap
while at the same time giving a move you would consider good enough to be
tested for singularity. Depth/2 is too aggressive in this sense.
The condition is similar to the common "AVOID NULL" condition.

Code: Select all

			if(depth - 4 * UNITDEPTH <= h_depth
				&& (flags == UPPER && score < beta))
				return AVOID_NULL;

			if(depth - 4 * UNITDEPTH <= h_depth
				&& ( (flags == EXACT && score > alpha)
				  || (flags == LOWER && score >= beta)))
				return HASH_GOOD;

			return HASH_HIT;
I can't say 4 is the optimum, because SE did not work for me in the first place.
For all I know depth / 2 could be better, but to get anything out of the IID
the depth - 3 requirement should be changed accordingly or dropped altogether.
And in my tests, testing just any move out of the TT (rather than a good enough one) was bad.
YMMV
Edsel Apostol
Posts: 803
Joined: Mon Jul 17, 2006 5:53 am
Full name: Edsel Apostol

Re: Stockfish 1.8 tweaks

Post by Edsel Apostol »

Daniel Shawul wrote:I picked the number 4 as a compromise. It makes the IID relatively cheap
while at the same time giving a move you would consider good enough to be
tested for singularity. Depth/2 is too aggressive in this sense.
The condition is similar to the common "AVOID NULL" condition.

Code: Select all

			if(depth - 4 * UNITDEPTH <= h_depth
				&& (flags == UPPER && score < beta))
				return AVOID_NULL;

			if(depth - 4 * UNITDEPTH <= h_depth
				&& ( (flags == EXACT && score > alpha)
				  || (flags == LOWER && score >= beta)))
				return HASH_GOOD;

			return HASH_HIT;
I can't say 4 is the optimum, because SE did not work for me in the first place.
For all I know depth / 2 could be better, but to get anything out of the IID
the depth - 3 requirement should be changed accordingly or dropped altogether.
And in my tests, testing just any move out of the TT (rather than a good enough one) was bad.
YMMV
I don't understand why you would use something similar to AVOID_NULL as a condition here. It seems too restrictive and might miss a lot of positions where SE would be beneficial. Maybe that's the reason it didn't work for you.

In my implementation, as long as there is a hash move retrieved from the hash table, the hash depth is within a reasonable distance of the current depth, and a reduced-depth search without the hash move, with the bound lowered by a margin below the hash score, returns less than that bound, I already consider the move singular. Simple, and it works well in my experimental version here.
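A minimal sketch of the test described above might look like this. All names and constants here are illustrative assumptions, not taken from any particular engine; the caller is assumed to have already verified that a hash move exists and to supply the result of the reduced-depth search of the remaining moves.

```cpp
#include <cassert>

// Hypothetical constants: the "reasonable distance" between hash depth and
// current depth, and the margin subtracted from the hash score.
constexpr int kDepthDistance  = 4;  // plies
constexpr int kSingularMargin = 50; // centipawns

// hashDepth/hashScore come from the TT entry; reducedSearchScore is the
// result of a reduced-depth search of all moves except the hash move, with
// the bound lowered by the margin below the hash score.
bool is_singular(int depth, int hashDepth, int hashScore, int reducedSearchScore)
{
    if (hashDepth < depth - kDepthDistance)
        return false; // TT entry was searched too shallow to trust
    return reducedSearchScore < hashScore - kSingularMargin; // no other move comes close
}
```

So a move is flagged singular only when the entry is deep enough and every alternative falls short of the lowered bound.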
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Stockfish 1.8 tweaks

Post by Daniel Shawul »

I don't understand why you would use something similar to AVOID_NULL as a condition here. It seems too restrictive and might miss a lot of positions where SE would be beneficial. Maybe that's the reason it didn't work for you.
No, you misunderstood. AVOID_NULL has nothing to do with singularity whatsoever.
I just presented it here to show how similar the conditions are: they both check
whether the previous search was within a reasonable depth before deciding to cut off
in their own way.
In my implementation, as long as there is a hash move retrieved from the hash table, the hash depth is within a reasonable distance of the current depth, and a reduced-depth search without the hash move, with the bound lowered by a margin below the hash score, returns less than that bound, I already consider the move singular. Simple, and it works well in my experimental version here.
I am confused: you just said SE didn't work for you in your previous post.
Edit: OK, I see; what you meant is that IID didn't add anything on top of SE.
Anyway, what you described here is what I did in my test, except with different depths and margins, apparently. We will see how much it is worth once Bob's tests are finished.
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Daniel Shawul wrote:Stockfish's IID should not help at all, because the criteria for IID and the singular search do not match.
IID is done with depth / 2, while singularity tests are done for a TT move searched to at least depth - 3.
That means that for depth >= 8, nothing from IID reaches the singularity tests.
The way I did it in Scorpio was to use depth - 4 for both, so that IID gives me a move to always test for singularity.
Also, you have a 'fail high' node condition for the IID tests, which causes some mismatch even if the depths were the same.
Yes, exactly. Another thing about the SE implementation in Stockfish that I had not read well. :oops: When I found this, I also chose depth - 4 both as the IID depth (in non-PV nodes) and as the limit on TT depth for considering a node singular.
As for the 'fail high' condition, also true. I will drop the (!isCheck && ss->eval >= beta - IIDMargin) condition from IID in the next test.
Testing conditions:
tc=/0:40+.1 option.Threads=1 option.Hash=32 option.Ponder=false -pgnin gaviota-starters.pgn -concurrency 1 -repeat -games 1000
hash cleared between games
make build ARCH=x86-64 COMP=gcc
around 680 kps on 1 thread at the start position.
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Vratko Polák wrote:But there probably is a good reason why the condition uses ttValue (minus margin), so I will think about it more deeply.
At higher depths, ttValue tends to be near the previous beta, so it is probably not far (compared to the margin) from the current alpha. Also, the margin is tuned for using ttValue. Other than that, I have found no valid reason to use ttValue instead of alpha.
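For concreteness, the two candidate exclusion-search windows discussed here differ only in the value the margin is subtracted from; the margin below is an illustrative number, not the engine's tuned one.

```cpp
#include <cassert>

// The singular verification search uses a zero-window (b - 1, b). The thread
// discusses anchoring b at ttValue (as in Stockfish 1.8) versus at alpha.
constexpr int SingularExtensionMargin = 32; // illustrative value only

int singular_beta_from_tt(int ttValue) { return ttValue - SingularExtensionMargin; }
int singular_beta_from_alpha(int alpha) { return alpha  - SingularExtensionMargin; }
```

When ttValue sits close to alpha, as argued above, the two windows are nearly the same, which is why the swap is plausible without retuning the margin.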

Anyway, I have run a third test for SMRC; this time it was Tinapa with a verification reduction of (R+2). The original Stockfish lost +186 =592 -222 (LOS=11:88). So the original would lose to the combined entity "SMRC Stockfish with various null-move settings" by +564 =1801 -635 (LOS=8:91), which means a very high probability that at least one of the patches is better than the original Stockfish. Here is the tested patch:

Code: Select all

diff -dur src-Ch/search.cpp src-TiAv0SmrcCh/search.cpp
--- src-Ch/search.cpp   2010-07-09 13:04:18.000000000 +0200
+++ src-TiAv0SmrcCh/search.cpp  2010-07-31 21:21:33.000000000 +0200
@@ -1058,7 +1058,7 @@
     const TTEntry* tte;
     Key posKey;
     Move ttMove, move, excludedMove;
-    Depth ext, newDepth;
+    Depth ext, newDepth, oldDepth = depth;
     Value bestValue, value, oldAlpha;
     Value refinedValue, nullValue, futilityValueScaled; // Non-PV specific
     bool isCheck, singleEvasion, singularExtensionNode, moveIsCheck, captureOrPromotion, dangerous;
@@ -1185,7 +1185,7 @@
         ss->currentMove = MOVE_NULL;

         // Null move dynamic reduction based on depth
-        int R = 3 + (depth >= 5 * OnePly ? depth / 8 : 0);
+        int R = 3 + (depth > 4*OnePly ? (int(depth) - 3*int(OnePly)/2) / (3*int(OnePly)) : 0); // inspired by Tinapa 1.01

         // Null move dynamic reduction based on value
         if (refinedValue - beta > PawnValueMidgame)
@@ -1206,11 +1206,13 @@
                 nullValue = beta;

             // Do zugzwang verification search at high depths
-            if (depth < 6 * OnePly)
+            if (depth-(R+2)*OnePly < OnePly)
                 return nullValue;

             ss->skipNullMove = true;
-            Value v = search<NonPV>(pos, ss, alpha, beta, depth-5*OnePly, ply);
+            (ss-1)->reduction += (R+2)*OnePly;
+            Value v = search<NonPV>(pos, ss, alpha, beta, depth-(R+2)*OnePly, ply);
+            (ss-1)->reduction -= (R+2)*OnePly;
             ss->skipNullMove = false;

             if (v >= beta)
@@ -1285,8 +1287,7 @@
       // its siblings. To verify this we do a reduced search on all the other moves but the
       // ttMove, if result is lower then ttValue minus a margin then we extend ttMove.
       if (   singularExtensionNode
-          && move == tte->move()
-          && ext < OnePly)
+          && move == tte->move())
       {
           Value ttValue = value_from_tt(tte->value(), ply);

@@ -1295,12 +1296,17 @@
               Value b = ttValue - SingularExtensionMargin;
               ss->excludedMove = move;
               ss->skipNullMove = true;
-              Value v = search<NonPV>(pos, ss, b - 1, b, depth / 2, ply);
+              assert((depth + (ss-1)->reduction) / 2 <= depth);
+              Value v = search<NonPV>(pos, ss, b - 1, b, (depth + (ss-1)->reduction) / 2, ply);
               ss->skipNullMove = false;
               ss->excludedMove = MOVE_NONE;
               if (v < ttValue - SingularExtensionMargin)
                   ext = OnePly;
+              else
+                  singularExtensionNode = false;
           }
+          else
+              singularExtensionNode = false;
       }

       newDepth = depth - OnePly + ext;
@@ -1396,6 +1402,19 @@
           }
       }

+      // Singular Move Reduction Cancellation.
+      if (   singularExtensionNode
+          && ttMove == move
+          && (ss-1)->reduction
+          && value >= beta)
+      {
+          depth += (ss-1)->reduction;
+          newDepth += (ss-1)->reduction;
+          assert(PvNode == NonPV);
+          value = newDepth < OnePly ? -qsearch<NonPV>(pos, ss+1, -(alpha+1), -alpha, Depth(0), ply+1)
+                                    : - search<NonPV>(pos, ss+1, -(alpha+1), -alpha, newDepth, ply+1);
+      }
+
       // Step 16. Undo move
       pos.undo_move(move);

@@ -1444,7 +1463,7 @@

     ValueType f = (bestValue <= oldAlpha ? VALUE_TYPE_UPPER : bestValue >= beta ? VALUE_TYPE_LOWER : VALUE_TYPE_EXACT);
     move = (bestValue <= oldAlpha ? MOVE_NONE : ss->bestMove);
-    TT.store(posKey, value_to_tt(bestValue, ply), f, depth, move, ss->eval, ei.kingDanger[pos.side_to_move()]);
+    TT.store(posKey, value_to_tt(bestValue, ply), f, oldDepth, move, ss->eval, ei.kingDanger[pos.side_to_move()]);

     // Update killers and history only for non capture moves that fails high
     if (bestValue >= beta)
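For reference, the old and new dynamic null-move reductions from the diff can be compared directly. This sketch assumes OnePly = 2, the internal depth unit of Stockfish 1.8; depth is in internal units and R is the reduction in plies.

```cpp
#include <cassert>

// Side-by-side sketch of the two dynamic null-move reduction formulas from
// the diff above, with OnePly = 2 as in Stockfish 1.8.
constexpr int OnePly = 2;

int R_original(int depth) // Stockfish 1.8
{
    return 3 + (depth >= 5 * OnePly ? depth / 8 : 0);
}

int R_tinapa(int depth) // Tinapa-inspired patch
{
    return 3 + (depth > 4 * OnePly ? (depth - 3 * OnePly / 2) / (3 * OnePly) : 0);
}
```

The two formulas agree at low and medium depths, but the Tinapa-inspired one starts reducing an extra ply from around depth 11 onward, so it is slightly more aggressive at high depth.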
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Vratko Polák wrote:
Vratko Polák wrote:But there probably is a good reason why the condition uses ttValue (minus margin), so I will think about it more deeply.
At higher depths, ttValue tends to be near the previous beta, so it is probably not far (compared to the margin) from the current alpha. Also, the margin is tuned for using ttValue. Other than that, I have found no valid reason to use ttValue instead of alpha.
So, here is another testing result. The patch used alpha instead of ttValue for singular move detection, non-PV IID used depth - 4 * OnePly, and that was also the minimal requirement on tte->depth() in singular move detection. Also, I kept SMRC in, to obtain better results. :wink: The original Stockfish lost +189 =577 -234 (LOS=6:93), so it looks like this patch can be considered better (bayeselo says better by 12 +- 17 Elo, so nothing certain). This patch still has the 'fail high' condition in. Here is the patch:

Code: Select all

diff -dur src-Ch/search.cpp src-AsmSmrcCh/search.cpp
--- src-Ch/search.cpp   2010-07-09 13:04:18.000000000 +0200
+++ src-AsmSmrcCh/search.cpp    2010-08-01 22:36:25.000000000 +0200
@@ -1058,7 +1058,7 @@
     const TTEntry* tte;
     Key posKey;
     Move ttMove, move, excludedMove;
-    Depth ext, newDepth;
+    Depth ext, newDepth, oldDepth = depth;
     Value bestValue, value, oldAlpha;
     Value refinedValue, nullValue, futilityValueScaled; // Non-PV specific
     bool isCheck, singleEvasion, singularExtensionNode, moveIsCheck, captureOrPromotion, dangerous;
@@ -1210,7 +1210,9 @@
                 return nullValue;

             ss->skipNullMove = true;
+            (ss-1)->reduction += 5*OnePly;
             Value v = search<NonPV>(pos, ss, alpha, beta, depth-5*OnePly, ply);
+            (ss-1)->reduction -= 5*OnePly;
             ss->skipNullMove = false;

             if (v >= beta)
@@ -1240,7 +1242,7 @@
         &&  ttMove == MOVE_NONE
         && (PvNode || (!isCheck && ss->eval >= beta - IIDMargin)))
     {
-        Depth d = (PvNode ? depth - 2 * OnePly : depth / 2);
+        Depth d = (PvNode ? depth - 2 * OnePly : depth - 4 * OnePly);

         ss->skipNullMove = true;
         search<PvNode>(pos, ss, alpha, beta, d, ply);
@@ -1262,7 +1264,7 @@
                            && tte && tte->move()
                            && !excludedMove // Do not allow recursive singular extension search
                            && is_lower_bound(tte->type())
-                           && tte->depth() >= depth - 3 * OnePly;
+                           && tte->depth() >= depth - 4 * OnePly;

     // Step 10. Loop through moves
     // Loop through all legal moves until no moves remain or a beta cutoff occurs
@@ -1285,22 +1287,24 @@
       // its siblings. To verify this we do a reduced search on all the other moves but the
       // ttMove, if result is lower then ttValue minus a margin then we extend ttMove.
       if (   singularExtensionNode
-          && move == tte->move()
-          && ext < OnePly)
+          && move == tte->move())
       {
-          Value ttValue = value_from_tt(tte->value(), ply);
-
-          if (abs(ttValue) < VALUE_KNOWN_WIN)
+          if (abs(alpha) < VALUE_KNOWN_WIN)
           {
-              Value b = ttValue - SingularExtensionMargin;
+              Value b = alpha - SingularExtensionMargin;
               ss->excludedMove = move;
               ss->skipNullMove = true;
-              Value v = search<NonPV>(pos, ss, b - 1, b, depth / 2, ply);
+              assert((depth + (ss-1)->reduction) / 2 <= depth);
+              Value v = search<NonPV>(pos, ss, b - 1, b, (depth + (ss-1)->reduction) / 2, ply);
               ss->skipNullMove = false;
               ss->excludedMove = MOVE_NONE;
-              if (v < ttValue - SingularExtensionMargin)
+              if (v < alpha - SingularExtensionMargin)
                   ext = OnePly;
+              else
+                  singularExtensionNode = false;
           }
+          else
+              singularExtensionNode = false;
       }

       newDepth = depth - OnePly + ext;
@@ -1396,6 +1400,19 @@
           }
       }

+      // Singular Move Reduction Cancellation.
+      if (   singularExtensionNode
+          && ttMove == move
+          && (ss-1)->reduction
+          && value >= beta)
+      {
+          depth += (ss-1)->reduction;
+          newDepth += (ss-1)->reduction;
+          assert(PvNode == NonPV);
+          value = newDepth < OnePly ? -qsearch<NonPV>(pos, ss+1, -(alpha+1), -alpha, Depth(0), ply+1)
+                                    : - search<NonPV>(pos, ss+1, -(alpha+1), -alpha, newDepth, ply+1);
+      }
+
       // Step 16. Undo move
       pos.undo_move(move);

@@ -1444,7 +1461,7 @@

     ValueType f = (bestValue <= oldAlpha ? VALUE_TYPE_UPPER : bestValue >= beta ? VALUE_TYPE_LOWER : VALUE_TYPE_EXACT);
     move = (bestValue <= oldAlpha ? MOVE_NONE : ss->bestMove);
-    TT.store(posKey, value_to_tt(bestValue, ply), f, depth, move, ss->eval, ei.kingDanger[pos.side_to_move()]);
+    TT.store(posKey, value_to_tt(bestValue, ply), f, oldDepth, move, ss->eval, ei.kingDanger[pos.side_to_move()]);

     // Update killers and history only for non capture moves that fails high
     if (bestValue >= beta)
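The interplay between the changed non-PV IID depth and the changed tte->depth() requirement can be checked directly. This sketch assumes OnePly = 2 as in Stockfish 1.8, with depth in internal units.

```cpp
#include <cassert>

// Why the patched non-PV IID depth always satisfies the singularity test's
// TT-depth requirement, while the original one stops satisfying it at
// higher depths (OnePly = 2 as in Stockfish 1.8).
constexpr int OnePly = 2;

int iid_depth_original(int depth) { return depth / 2; }          // old non-PV IID depth
int iid_depth_patched(int depth)  { return depth - 4 * OnePly; } // patched

// TT-depth requirement for the singularity test, before and after:
bool qualifies_original(int ttDepth, int depth) { return ttDepth >= depth - 3 * OnePly; }
bool qualifies_patched(int ttDepth, int depth)  { return ttDepth >= depth - 4 * OnePly; }
```

This is exactly the mismatch Daniel pointed out: with the original values, the entry produced by non-PV IID is stored too shallow to qualify for the singularity test at higher depths, whereas the patched values match by construction.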
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Vratko Polák wrote:As for 'fail high' condition, also true. I will drop (!isCheck && ss->eval >= beta - IIDMargin) condition from IID in the next test.
In the test the original Stockfish lost, but with an LOS of only 39:60, which means the patch was probably weak for an SMRC version. It was only a quick patch; a polished version is now being tested.

I am thinking about moving my attention to other areas, maybe move ordering or even evaluation, and it may happen that I will not have a promising patch ready for tomorrow's testing. So, if anyone has an interesting patch, I may test it.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Stockfish 1.8 tweaks

Post by Daniel Shawul »

I hope everyone takes it like you do and does not feel bad when someone suggests a possible improvement. I appreciate your effort. Good luck with the other improvements.
Daniel
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

QED wrote:
Vratko Polák wrote:As for 'fail high' condition, also true. I will drop (!isCheck && ss->eval >= beta - IIDMargin) condition from IID in the next test.
In the test the original Stockfish lost, but with an LOS of only 39:60, which means the patch was probably weak for an SMRC version. It was only a quick patch; a polished version is now being tested.
I have tested something similar in recent days, namely allowing IID also when in check, and I didn't get any increment either.
QED wrote: I am thinking about moving my attention to other areas, maybe move ordering or even evaluation, and it may happen that I will not have a promising patch ready for tomorrow's testing. So, if anyone has an interesting patch, I may test it.
Nice, improving move ordering would be great. Thanks for your work; it is very interesting. Perhaps not very easy for me to follow, because I am used to testing changes one by one, independently, and not as an aggregate, but I think this is mainly a problem of mine ;-)
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Marco Costalba wrote:Nice, improving move ordering would be great. Thanks for your work; it is very interesting. Perhaps not very easy for me to follow, because I am used to testing changes one by one, independently, and not as an aggregate, but I think this is mainly a problem of mine ;-)
Testing changes one by one is good when you have a lot of unused cycles compared to the rate at which you produce patches. Not for me, as I am an active correspondence player currently on vacation (from my regular job).

By the way, the 'polished' patch is still weak. In the meantime, I had another 'reduction cancellation' idea, which is now being tested. I will polish the polished patch even more, and maybe test it, if my move ordering ideas continue to look weak.

@Daniel: Thank you!