Stockfish 1.8 tweaks

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

QED wrote:I will publish all my patches here in this thread.

First, I like the Makefile to compile fast and not to ask about the architecture:

Code: Select all

diff -dur src-ja/Makefile src/Makefile
--- src-ja/Makefile     2010-06-30 03:19:38.000000000 +0200
+++ src/Makefile        2010-07-06 13:19:13.000000000 +0200
@@ -299,7 +299,7 @@
 ### ==========================================================================
 
 default:
-       $(MAKE) ARCH=$(ARCH) COMP=$(COMP) build
+       $(MAKE) ARCH=x86-32 COMP=gcc build
 
 help:
        @echo ""
@@ -408,7 +408,7 @@
        -strip $(BINDIR)/$(EXE)
 
 clean:
-       $(RM) $(EXE) *.o .depend *~ core bench.txt
+       $(RM) $(EXE) *.o .depend *~ core bench.txt *.gcda
 
 testrun:
        @$(PGOBENCH)
Joona noticed that with your patch it is impossible to build anything other than an x86 32-bit binary.

And anyway, Joona feels (and I agree) that it is better to force people to make the right choice than to build a 32-bit binary by default (which works, but is very slow on 64-bit systems).
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

QED wrote: Anyway, I have another patch and test to publish.
I have looked more deeply at your work. Let me say it is very ambitious :-) It is already difficult to tune a single parameter in search, and you are trying to tune the whole null search + verification in one go.

I am really interested in looking closely at how your tests proceed...
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Marco Costalba wrote:It is already difficult to tune a single parameter in search and you are trying to tune the whole null search + verification in one go.

I am really interested in looking closely at how your tests proceed...
I was tuning them both because they seem to interact. And now I think there should actually be six parameters: multiplier, shift, and divisor, for null and verification separately. I think that depth / 3 is too much for verification, because of the recursion.

Anyway, my halfply adaptive Tinapa lost rather badly to vanilla Stockfish (+186 =580 -234, LOS=4:95). It might be another case of triangle nontransitivity (or noise with bad luck, as usual), or Tinapa was already tuned for test positions, so it is good for sudden resolutions but bad for slow buildup. Tinapa vs Stockfish was my second test in the script, but I left a typo there, so no result yet.

As for the Makefile tweak, it is only for the convenience of anybody who has 32-bit hardware. And, for example, "make ARCH=x86-64 profile-build" still does what it is told. Maybe I should do:

Code: Select all

### 3.1 Selecting architecture (default = x86-32)
ifeq ($(ARCH),)
        ARCH=x86-32
endif
Testing conditions:
tc=/0:40+.1 option.Threads=1 option.Hash=32 option.Ponder=false -pgnin gaviota-starters.pgn -concurrency 1 -repeat -games 1000
hash clear between games
make build ARCH=x86-64 COMP=gcc
around 680kps on 1 thread at startposition.
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Stockfish 1.8 tweaks

Post by mcostalba »

QED wrote: I was tuning them both because they seem to interact. And now I think there should actually be six parameters: multiplier, shift, and divisor, for null and verification separately.
Just a hint: the more parameters you try to tune at the same time, the more the test noise level increases. It is already difficult to tune one, two is very, very difficult, and six seems prohibitive... even with a cluster.

We have given Tinapa a try; as expected, the result seems to be much smaller than the error bar (less than 5 Elo points).

I am also testing your interesting idea regarding zugzwang verification; in a couple of days I hope to have some (good) results.

I prefer to proceed step by step, because I tried the multi-parameter approach in the past and it turned out to be too difficult for me.
Look

Re: Stockfish 1.8 tweaks

Post by Look »

mcostalba wrote:I prefer to proceed step by step, because I tried the multi-parameter approach in the past and it turned out to be too difficult for me.
How exactly are you tuning parameters?
User avatar
Houdini
Posts: 1471
Joined: Tue Mar 16, 2010 12:00 am

Re: Stockfish 1.8 tweaks

Post by Houdini »

Doesn't the current formula for R (and also the proposed modification) rely on the value of OnePly being 2?
In other words, instead of

Code: Select all

int R = 3 + (depth >= 5 * OnePly ? depth / 8 : 0);
I suggest

Code: Select all

int R = 3 + (depth >= 5 * OnePly ? depth / (4 * OnePly) : 0); 
Robert
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Marco Costalba wrote:Jus an hint: more parameters you try to tune at the same time and more the test noise level increases. It is already difficult to tune one, two is very very difficult, 6 seems proibitive...even with a cluster.
Now I realize that "to tune" has the specific meaning of a scientific, maybe even automatic, search for optimal values. Therefore what I was doing was not "tuning", but merely "guessing". And the function being optimized was not LOS (or Elo), just my happiness with what the PV of the test position analysis looks like. The test position is the one I published in this thread, and I was doing "go nodes 50000000" on a single thread. That is why I introduced UCI options, to make it easier for someone else to tune them their way.
Look

Re: Stockfish 1.8 tweaks

Post by Look »

[...]
That is why I introduced UCI options, to make it easier for someone else to tune them their way.
Can you upload the whole source code for this modification somewhere? That way I could give it a try for tuning, no promises though. :)
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

I was ill, my notebook did not like high temperatures, and I was low on time in my correspondence games. I decided to move the testing matches to another computer, so now I have a new signature. This machine had an old version of the Qt libraries, so cutechess-cli was not working there. As a quick fix I ran cutechess on a remote machine, but then I got worried about randomness from network lag. I changed the time control to reach approximately the same number of nodes as on the notebook, and then I added an increment to reduce time-scramble problems. Anyway, I ran two matches. The results were quite different, so I upgraded Qt and ran two matches again with the local cutechess-cli, and they were even more different.

The matches were between the original Stockfish and adaptive Tinapa. As Robert pointed out, we probably should not assume OnePly == Depth(2), so I implemented adaptive Tinapa in another way:

Code: Select all

diff -dur src-Ch/search.cpp src-TiAvCh/search.cpp
--- src-Ch/search.cpp   2010-07-09 13:04:18.000000000 +0200
+++ src-TiAvCh/search.cpp       2010-07-15 16:04:09.000000000 +0200
@@ -1185,7 +1185,7 @@
         ss->currentMove = MOVE_NULL;

         // Null move dynamic reduction based on depth
-        int R = 3 + (depth >= 5 * OnePly ? depth / 8 : 0);
+        int R = 3 + (depth > 4*OnePly ? (depth - 3*OnePly/2) / (3*OnePly) : 0); // inspired by Tinapa 1.01

         // Null move dynamic reduction based on value
         if (refinedValue - beta > PawnValueMidgame)
@@ -1206,11 +1206,11 @@
                 nullValue = beta;

             // Do zugzwang verification search at high depths
-            if (depth < 6 * OnePly)
+            if (depth-R*OnePly < OnePly)
                 return nullValue;
 
             ss->skipNullMove = true;
-            Value v = search<NonPV>(pos, ss, alpha, beta, depth-5*OnePly, ply);
+            Value v = search<NonPV>(pos, ss, alpha, beta, depth-R*OnePly, ply);
             ss->skipNullMove = false;
 
             if (v >= beta)
and this needs a definition for dividing two Depths to get an int:

Code: Select all

diff -dur src-Ch/depth.h src-TiAvCh/depth.h
--- src-Ch/depth.h      2010-07-09 13:03:34.000000000 +0200
+++ src-TiAvCh/depth.h  2010-07-15 16:02:43.000000000 +0200
@@ -55,6 +55,7 @@
 inline Depth operator* (int i, Depth d) { return Depth(int(d) * i); }
 inline void operator*= (Depth &d, int i) { d = Depth(int(d) * i); }
 inline Depth operator/ (Depth d, int i) { return Depth(int(d) / i); }
+inline int   operator/ (Depth d1, Depth d2) { return int(d1)/int(d2); }
 inline void operator/= (Depth &d, int i) { d = Depth(int(d) / i); }
So what were the results? From the point of view of the original Stockfish: +199 =609 -192 (LOS=57:42), +212 =573 -215 (46:53), +177 =628 -195 (26:73) and +204 =614 -182 (76:23).

What is the moral of this story? Bullet games are very sensitive to computer timing issues, so matches are not repeatable. If bayeselo says LOS=80:19, it still means almost nothing, as a repeated match could end the other way. And we still do not know whether adaptive Tinapa is better or worse.
QED
Posts: 60
Joined: Thu Nov 05, 2009 9:53 pm

Re: Stockfish 1.8 tweaks

Post by QED »

Adam Kleng wrote:Can you upload the whole source code for this modification somewhere? That way I could give it a try for tuning, no promise though. :)
I can, but I am not sure if I want to. It would look like a release, and I am no expert on the GPL or related things. I could study Wikipedia and think about it deeply, but maybe it would be easier if you applied the patch yourself.

In the meantime, I made a crude attempt at a patch with UCI options, halfply-precision reductions AND two stages of null move search. The first stage is scout&reduce, the second is null&verify. This time I was not able to guess anything nontrivial that works. Anyway, here is the patch (search.cpp and ucioption.cpp are affected) for anyone to read and maybe even tune.

Code: Select all

diff -dur src-Ch/search.cpp src-Avs2Ch/search.cpp
--- src-Ch/search.cpp   2010-07-11 14:58:47.000000000 +0200
+++ src-Avs2Ch/search.cpp       2010-07-11 20:29:17.000000000 +0200
@@ -175,6 +175,11 @@
 
   // Step 8. Null move search with verification search
 
+  int NullScoutMultiplier, NullScoutShift, NullScoutDivisor;
+  int ResearchMultiplier, ResearchShift, ResearchDivisor;
+  int NullSearchMultiplier, NullSearchShift, NullSearchDivisor;
+  int VerificationMultiplier, VerificationShift, VerificationDivisor;
+
   // Null move margin. A null move search will not be done if the static
   // evaluation of the position is more than NullMoveMargin below beta.
   const Value NullMoveMargin = Value(0x200);
@@ -458,6 +463,19 @@
   if (button_was_pressed("Clear Hash") || button_was_pressed("New Game"))
       TT.clear();
 
+  NullScoutMultiplier    = get_option_value_int("Null Scout Reduction Multiplier");
+  NullScoutShift         = get_option_value_int("Null Scout Reduction Shift (halfplies)");
+  NullScoutDivisor       = get_option_value_int("Null Scout Reduction Divisor");
+  ResearchMultiplier     = get_option_value_int("Research Reduction Multiplier");
+  ResearchShift          = get_option_value_int("Research Reduction Shift (halfplies)");
+  ResearchDivisor        = get_option_value_int("Research Reduction Divisor");
+  NullSearchMultiplier   = get_option_value_int("Null Search Reduction Multiplier");
+  NullSearchShift        = get_option_value_int("Null Search Reduction Shift (halfplies)");
+  NullSearchDivisor      = get_option_value_int("Null Search Reduction Divisor");
+  VerificationMultiplier = get_option_value_int("Verification Reduction Multiplier");
+  VerificationShift      = get_option_value_int("Verification Reduction Shift (halfplies)");
+  VerificationDivisor    = get_option_value_int("Verification Reduction Divisor");
+
  CheckExtension[1]         = Depth(get_option_value_int("Check Extension (PV nodes)"));
  CheckExtension[0]         = Depth(get_option_value_int("Check Extension (non-PV nodes)"));
  SingleEvasionExtension[1] = Depth(get_option_value_int("Single Evasion Extension (PV nodes)"));
@@ -1174,64 +1192,146 @@
     // When we jump directly to qsearch() we do a null move only if static value is
     // at least beta. Otherwise we do a null move if static value is not more than
     // NullMoveMargin under beta.
+
     if (   !PvNode
         && !ss->skipNullMove
         &&  depth > OnePly
-        &&  refinedValue >= beta - (depth >= 4 * OnePly ? NullMoveMargin : 0)
+        &&  refinedValue >= beta - NullMoveMargin
         && !isCheck
         && !value_is_mate(beta)
         &&  pos.non_pawn_material(pos.side_to_move()))
     {
+        // Null scout dynamic reduction based on value
+        int R2v = 0; // in halfplies (regardless of OnePly)
+        if (refinedValue - beta > PawnValueMidgame / 3)
+            R2v++;
+        if (refinedValue - beta > PawnValueMidgame)
+            R2v++;
+        if (refinedValue - beta > PawnValueMidgame * 3)
+            R2v++;
+        if (refinedValue - beta > PawnValueMidgame * 9)
+            R2v++;
+
         ss->currentMove = MOVE_NULL;
 
+        // First, nullscout&research thinking.
         // Null move dynamic reduction based on depth
-        int R = 3 + (depth >= 5 * OnePly ? depth / 8 : 0);
-
-        // Null move dynamic reduction based on value
-        if (refinedValue - beta > PawnValueMidgame)
-            R++;
+        int R2 = int(depth)*2/int(OnePly); // R2 is reduction in halfplies
+        R2 *= NullScoutMultiplier;
+        R2 += NullScoutShift*int(OnePly)/2;
+        R2 /= NullScoutDivisor;
+        R2 += R2v;
 
         pos.do_null_move(st);
         (ss+1)->skipNullMove = true;
 
-        nullValue = depth-R*OnePly < OnePly ? -qsearch<NonPV>(pos, ss+1, -beta, -alpha, Depth(0), ply+1)
-                                            : - search<NonPV>(pos, ss+1, -beta, -alpha, depth-R*OnePly, ply+1);
+        nullValue = depth-R2*OnePly/2 < OnePly ? -qsearch<NonPV>(pos, ss+1, -beta, -alpha, Depth(0), ply+1)
+                                               : - search<NonPV>(pos, ss+1, -beta, -alpha, depth-R2*OnePly/2, ply+1);
         (ss+1)->skipNullMove = false;
         pos.undo_null_move();
 
         if (nullValue >= beta)
         {
+            // Scout says we can afford to do a nullmove.
+            // That is good enough to try moderately reduced search.
+
             // Do not return unproven mate scores
             if (nullValue >= value_mate_in(PLY_MAX))
                 nullValue = beta;
 
+            // Now we compute R2 for reduction of reduced search
+            R2  = int(depth)*2/int(OnePly);
+            R2 *= ResearchMultiplier;
+            R2 += ResearchShift*int(OnePly)/2;
+            R2 /= ResearchDivisor;
+            R2 += R2v;
+
             // Do zugzwang verification search at high depths
-            if (depth < 6 * OnePly)
+            if (depth-R2*OnePly/2 < OnePly)
                 return nullValue;
 
             ss->skipNullMove = true;
-            Value v = search<NonPV>(pos, ss, alpha, beta, depth-5*OnePly, ply);
+            Value v = search<NonPV>(pos, ss, alpha, beta, depth-R2*OnePly/2, ply);
             ss->skipNullMove = false;
 
             if (v >= beta)
                 return nullValue;
+                // I believe v is a better bound here, but let us be conservative.
+
+            // If we are here, the reduced search failed low, so we must enter the move loop.
         }
         else
         {
-            // The null move failed low, which means that we may be faced with
-            // some kind of threat. If the previous move was reduced, check if
-            // the move that refuted the null move was somehow connected to the
-            // move which was reduced. If a connection is found, return a fail
-            // low score (which will cause the reduced move to fail high in the
-            // parent node, which will trigger a re-search with full depth).
-            if (nullValue == value_mated_in(ply + 2))
-                mateThreat = true;
+            // Nullscout did not succeed.
+            // Either we really cannot afford to do a null move here,
+            // or perhaps our inevitable advantage needs higher depths to show up.
+            // So we switch to nullsearch&verification thinking.
+            // Now we compute R2 for reduction of null search
+            R2  = int(depth)*2/int(OnePly);
+            R2 *= NullSearchMultiplier;
+            R2 += NullSearchShift*int(OnePly)/2;
+            R2 /= NullSearchDivisor;
+            R2 += R2v;
 
-            ss->threatMove = (ss+1)->currentMove;
-            if (   depth < ThreatDepth
-                && (ss-1)->reduction
-                && connected_moves(pos, (ss-1)->currentMove, ss->threatMove))
-                return beta - 1;
+            pos.do_null_move(st);
+            (ss+1)->skipNullMove = true;
+
+            nullValue = depth-R2*OnePly/2 < OnePly ? -qsearch<NonPV>(pos, ss+1, -beta, -alpha, Depth(0), ply+1)
+                                                   : - search<NonPV>(pos, ss+1, -beta, -alpha, depth-R2*OnePly/2, ply+1);
+            (ss+1)->skipNullMove = false;
+            pos.undo_null_move();
+
+            if (nullValue >= beta)
+            {
+                // So it looks like we actually can afford a null move after all.
+                // We still have to verify so as not to fall into shallow zugzwangs,
+                // and this verification cannot be too shallow,
+                // because there was something that disturbed the null scout.
+
+                // Do not return unproven mate scores
+                if (nullValue >= value_mate_in(PLY_MAX))
+                    nullValue = beta;
+
+                // Now we compute R2 for reduction of verification search
+                R2  = int(depth)*2/int(OnePly);
+                R2 *= VerificationMultiplier;
+                R2 += VerificationShift*int(OnePly)/2;
+                R2 /= VerificationDivisor;
+                R2 += R2v;
+
+                // Do zugzwang verification search at high depths
+                if (depth-R2*OnePly/2 < OnePly)
+                    return nullValue;
+
+                ss->skipNullMove = true;
+                Value v = search<NonPV>(pos, ss, alpha, beta, depth-R2*OnePly/2, ply);
+                ss->skipNullMove = false;
+
+                if (v >= beta)
+                    return nullValue;
+                    // v would be a worse bound here.
+
+                // We have found a zugzwang here.
+                // Or we may need higher depths to realize our advantage.
+                // Anyway, we proceed to the move loop.
+            }
+            else
+            {
+                // The null move failed low again, which means that we may be faced with
+                // some kind of threat. If the previous move was reduced, check if
+                // the move that refuted the null move was somehow connected to the
+                // move which was reduced. If a connection is found, return a fail
+                // low score (which will cause the reduced move to fail high in the
+                // parent node, which will trigger a re-search with full depth).
+                if (nullValue == value_mated_in(ply + 2))
+                    mateThreat = true;
+
+                ss->threatMove = (ss+1)->currentMove;
+                if (   depth < ThreatDepth
+                    && (ss-1)->reduction
+                    && connected_moves(pos, (ss-1)->currentMove, ss->threatMove))
+                    return beta - 1;
+            }
         }
     }
 
diff -dur src-Ch/ucioption.cpp src-Avs2Ch/ucioption.cpp
--- src-Ch/ucioption.cpp        2010-07-11 14:58:47.000000000 +0200
+++ src-Avs2Ch/ucioption.cpp    2010-07-11 21:02:37.000000000 +0200
@@ -101,6 +101,18 @@
     o["Passed Pawn Extension (non-PV nodes)"] = Option(0, 0, 2);
     o["Pawn Endgame Extension (PV nodes)"] = Option(2, 0, 2);
     o["Pawn Endgame Extension (non-PV nodes)"] = Option(2, 0, 2);
+    o["Null Scout Reduction Multiplier"] = Option(1, 0, 20);
+    o["Null Scout Reduction Shift (halfplies)"] = Option(14, 0, 200);
+    o["Null Scout Reduction Divisor"] = Option(3, 1, 100);
+    o["Research Reduction Multiplier"] = Option(1, 0, 20);
+    o["Research Reduction Shift (halfplies)"] = Option(14, 0, 200);
+    o["Research Reduction Divisor"] = Option(6, 1, 100);
+    o["Null Search Reduction Multiplier"] = Option(0, 0, 20);
+    o["Null Search Reduction Shift (halfplies)"] = Option(6, 0, 200);
+    o["Null Search Reduction Divisor"] = Option(1, 1, 100);
+    o["Verification Reduction Multiplier"] = Option(0, 0, 20);
+    o["Verification Reduction Shift (halfplies)"] = Option(10, 0, 200);
+    o["Verification Reduction Divisor"] = Option(1, 1, 100);
     o["Randomness"] = Option(0, 0, 10);
     o["Minimum Split Depth"] = Option(4, 4, 7);
     o["Maximum Number of Threads per Split Point"] = Option(5, 4, 8);