Lazy SMP and "lazy cluster" experiments

Discussion of chess software programming and technical issues.

petero2
Posts: 580
Joined: Mon Apr 19, 2010 5:07 pm
Location: Sweden
Contact:

ELO measurements

Post by petero2 » Sun Aug 06, 2017 9:17 pm

Time doubling vs cores doubling vs cluster nodes doubling

To see how well the algorithm scales, a series of test matches was played in which the hardware resources were successively doubled. Three different ways of doubling the hardware resources were tested:
* Doubling the thinking time.
* Doubling the number of cores on a single cluster node.
* Doubling the number of cluster nodes.

The matches were played either on identical "E5-2686 v4 @ 2.30GHz" EC2 nodes, on my 24 core computer (E5-2680 v3 @ 2.50GHz), or on my 16 core computer (E5-2670 0 @ 2.60GHz). Turbo boost was enabled during all tests. The transposition table size was 128MB. The EC2 instances, which were used for all cluster tests, are connected by 10 Gigabit Ethernet and the ping time between nodes is around 90 microseconds.

Matches are run using a slightly modified version of cutechess-cli. The modification uses thread affinities to lock each game to a subset of the available cores. This gives more reliable results, especially on NUMA hardware when running multiple games in parallel.
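The core-subset idea can be sketched as follows (a minimal sketch; the function name and structure are illustrative, not taken from the actual cutechess-cli modification, and the real pinning would be done with OS calls such as sched_setaffinity() on Linux):

```cpp
#include <vector>

// Partition the machine's cores into disjoint subsets, one per concurrent
// game. Each game would then be locked to its subset, e.g. with
// sched_setaffinity() on Linux. Illustrative sketch only.
std::vector<std::vector<int>> coreSubsets(int totalCores, int coresPerGame) {
    std::vector<std::vector<int>> subsets;
    for (int first = 0; first + coresPerGame <= totalCores; first += coresPerGame) {
        std::vector<int> subset;
        for (int c = 0; c < coresPerGame; c++)
            subset.push_back(first + c);
        subsets.push_back(subset);
    }
    return subsets;
}
```

With contiguous core numbering, each subset tends to stay within one NUMA node, which is presumably why affinities help most on NUMA hardware.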

The results can be summarized by the following table, which shows the elo increase when successively doubling thinking time (row 1), number of cores (row 2), and number of cluster nodes (row 3).

Code:

config   X1   X2   X4   X8  X16  X32
tXc1n1    -  112  100   85   72   63
t1cXn1    -   72   76   73   66   44
t1c1nX    -   50   46   51   29   --
A configuration tAcBnC means the time control is A times the base time control, the number of cores per cluster node is B, and the number of cluster nodes is C. The base time control was 24s/game+0.24s/move. Each entry in the table shows the elo increase compared to the configuration immediately to the left of the entry. For example, the value 100 in the X4 column in row 1 means that when playing 4*base_time vs 2*base_time on a single core and a single cluster node, the elo increase is 100.

The total elo increase between 1*base_time and 32*base_time is: 112+100+85+72+63 = 432.
The total elo increase between 1 core and 32 cores is: 72+76+73+66+44 = 331.
The total elo increase between 1 cluster node and 16 nodes is: 50+46+51+29 = 176.

The 32 node cluster test is missing because my current Amazon instance limit does not let me rent 32 computers at the same time. I might try to get this limit increased in the future.

The 32 core configuration is the only one that was running on more than one NUMA node. The 16 core configuration was run on an EC2 instance having 16 cores per NUMA node so it only used one NUMA node.

Each match was typically 1000 or 2000 games long, and the estimated error in the results was between 4 and 6 elo, one standard deviation. More details about the matches are given in the following table.

Code:

              dElo  sDev  win loss draw  depth1 depth2
 2t vs  1t  +112.0   4.5  778  153 1075   17.65  16.34
 4t vs  2t  +100.0   4.3  690  130 1180   19.12  17.81
 8t vs  4t   +85.4   4.3  615  133 1252   20.58  19.29 
16t vs  8t   +72.1   4.1  552  143 1305   21.90  20.65
32t vs 16t   +62.7   3.9  480  123 1397   23.21  22.05
                   
 2c vs  1c   +72.2   4.5  604  194 1202   17.19  16.68
 4c vs  2c   +76.4   4.5  642  209 1149   17.74  17.07
 8c vs  4c   +73.2   4.3  577  162 1261   18.71  18.00
16c vs  8c   +65.7   5.7  260   73  667   19.89  19.17
32c vs 16c   +44.0   5.3  203   77  720   20.55  19.97
                   
 2n vs  1n   +50.0   6.5  260  117  623   16.68  16.55
 4n vs  2n   +45.8   6.0  252  121  627   17.12  16.71
 8n vs  4n   +51.1   6.1  251  105  644   17.46  17.08
16n vs  8n   +28.6   6.1  203  121  676   18.09  17.70
The estimated standard deviation was computed by grouping game results in pairs (a pentanomial distribution), since each opening was repeated with colors reversed.
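The pair-based error estimate can be reproduced roughly like this (a sketch under my own assumptions about the bookkeeping; the names are mine, not texel's):

```cpp
#include <cmath>
#include <vector>

struct EloEstimate { double elo, sigma; };

// Elo difference and one-sigma error from pentanomial pair counts.
// counts[k] = number of game pairs in which the engine scored k/2 points
// (k = 0..4), each pair being one opening played with both colors.
EloEstimate pentanomialElo(const std::vector<double>& counts) {
    double nPairs = 0, sum = 0, sumSq = 0;
    for (int k = 0; k <= 4; k++) {
        double s = k / 4.0;                  // pair score normalized to [0,1]
        nPairs += counts[k];
        sum    += counts[k] * s;
        sumSq  += counts[k] * s * s;
    }
    double mean = sum / nPairs;              // expected score per game
    double var  = (sumSq / nPairs - mean * mean) / nPairs;  // variance of the mean
    double elo  = -400 * std::log10(1 / mean - 1);
    // Propagate the score uncertainty through the elo formula (delta method)
    double dEloDs = 400 / (std::log(10.0) * mean * (1 - mean));
    return { elo, dEloDs * std::sqrt(var) };
}
```

Pairing accounts for the correlation between the two games of an opening, which is why the pentanomial estimate differs from a naive trinomial one.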

depth1 and depth2 are the arithmetic mean of the search depth taken over all moves an engine configuration played in the match. Note that this value depends to a non-trivial extent on the number of moves the engine searched in very simple endgame positions where texel's maximum search depth (100 ply) was reached. There may be better ways to compute a representative average search depth from the PGN data.

Other cluster tests

Some test results using more than one core per cluster node:

Code:

                    dElo  sDev  win loss draw  depth1 depth2
t1c4n4  vs t1c4n1  +92.8   5.8  316   55  629   18.78  17.97
t1c32n2 vs t1c32n1 +39.2   5.6  217  102  705   20.73  20.17
t1c4n2  vs t1c4n1  +37.7   5.7  221  113  666   18.75  18.34

t1c2n2  vs t1c2n1  +31.4   6.7  228  138  634   17.74  17.43
t1c2n4  vs t1c2n2  +42.2   6.1  237  116  647   18.09  17.66
t1c2n8  vs t1c2n4  +31.4   5.7  203  113  684   18.49  18.19
t1c2n16 vs t1c2n8  +32.1   5.7  206  114  680   18.97  18.58
The last four rows can be used to extend the summary table above with one more row:

Code:

config   X1   X2   X4   X8  X16  X32
tXc1n1    -  112  100   85   72   63
t1cXn1    -   72   76   73   66   44
t1c1nX    -   50   46   51   29   --
t1c2nX    -   31   42   31   32   --
Varying transposition table size

With the given hardware texel fills the transposition table at around 16MB/s per thread. This means that the fixed 128MB hash table used in the previous tests is too small to hold the entire search tree when the thinking time, number of cores, or number of cluster nodes is large. To see if this affected the results, some tests were repeated with a 512MB hash table.

Code:

                    dElo  sDev  win loss draw  depth1 depth2
t8c1n1  vs t4c1n1  +97.1   4.2  654  109 1237   20.57  19.24
t16c1n1 vs t8c1n1  +83.4   4.2  604  133 1263   21.89  20.60
t32c1n1 vs t16c1n1 +72.4   4.0  538  127 1335   23.18  21.89
Some tests were also repeated with a 1MB transposition table, which is too small to hold the whole search tree for all configurations.

Code:

                    dElo  sDev  win loss draw  depth1 depth2
t2c1n1 vs t1c1n1  +103.3   4.1  875  182 1343   16.90  15.82
t4c1n1 vs t2c1n1   +77.5   4.4  619  180 1201   18.06  17.02
t8c1n1 vs t4c1n1   +77.0   4.3  603  167 1230   19.11  18.08

t1c2n1 vs t1c1n1   +48.6   4.4  544  266 1190   16.35  16.11
t1c4n1 vs t1c2n1   +38.8   3.1  953  508 2539   16.89  16.48
t1c8n1 vs t1c4n1   +46.3   4.2  465  200 1335   17.52  17.07
Using the above data the summary table can be extended:

Code:

config   X1   X2   X4   X8  X16  X32
tXc1n1    -  112  100   85   72   63
tXc1n1'   -             97   83   72     // 512MB hash
tXc1n1''  -  103   78   77               // 1MB hash
t1cXn1    -   72   76   73   66   44
t1cXn1''  -   49   39   46               // 1MB hash
t1c1nX    -   50   46   51   29   --
t1c2nX    -   31   42   31   32   --
It can be seen that too small a transposition table hurts more when doubling the number of cores than when doubling the thinking time.
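The 16MB/s fill rate mentioned earlier can be turned into a back-of-the-envelope estimate of how quickly a table of a given size is exhausted (the function name is mine, for illustration):

```cpp
// Rough time in seconds for nThreads searchers to fill a transposition
// table of tableMB megabytes, assuming ~16 MB/s of new entries per thread.
double fillTimeSeconds(double tableMB, int nThreads) {
    return tableMB / (16.0 * nThreads);
}
```

A single thread fills 128MB in about 8 seconds, while 16 threads fill it in about half a second, which is consistent with the small-table penalty being larger for the multi-core configurations.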

Possible NUMA issues

As mentioned above, the 32 core configuration was the only one running on more than one NUMA node. The result for 32 cores vs 16 cores (+44 elo) was also somewhat lower than expected. To further test NUMA behavior, a 16 core vs 8 core match was played on the 16 core computer (which has two NUMA nodes) and a 24 core vs 12 core match was played on the 24 core computer (which also has two NUMA nodes).

Code:

                    dElo  sDev  win loss draw  depth1 depth2
t1c16n1 vs t1c8n1  +44.4   5.6  219   92  689   19.62  18.95    // 128MB hash
t1c16n1 vs t1c8n1  +48.3   6.1  240  102  658   19.68  19.00    // 1024MB hash
t1c24n1 vs t1c12n1 +41.5   5.9  211   92  697   20.12  19.58    // 128MB hash
t1c24n1 vs t1c12n1 +47.5   6.0  229   93  678   20.39  19.80    // 1024MB hash
The elo increase is similar to the earlier 32 cores vs 16 cores match, and the 16 cores vs 8 cores result (+44.4 elo) is lower than the earlier non-NUMA 16 cores vs 8 cores result (+66 elo).

Test data

All games played for these measurements are available in this archive file.

The tests used the texel development version 1.07a27, available for download here.

Turbo boost effects

Turbo boost was enabled for all tests. This makes the computers run faster than the base clock frequency, but the speed increase is larger when fewer cores are in use. This can potentially skew the results in favor of configurations using few cores, i.e. it can look like the SMP algorithm scales worse than it would on hypothetical hardware that runs at the same clock speed regardless of how many cores are in use.

All of the computers used have very good cooling systems, so the actual turbo boost frequency can be calculated from the processor turbo boost specification as explained here. The following table shows the turbo boost multiplier as a function of the number of active cores:

Code:

             18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1   base    incr
E5-2686 v4 :  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  5  7  7   2.3GHz  0.1GHz
E5-2680 v3 :                    4  4  4  4  4  4  4  4  5  6  8  8   2.5GHz  0.1GHz
E5-2670 0  :                                4  4  5  5  6  6  7  7   2.6GHz  0.1GHz
As an example of the worst case, consider a match played between 1 core and 12 cores on the E5-2680 computer. When the 1 core engine is thinking only one core is active so the clock frequency is 2.5+8*0.1 = 3.3GHz. When the 12 core engine is thinking all 12 cores are active so the clock frequency is 2.5+4*0.1 = 2.9GHz. In this case it would be as if the 12 core engine had a 12% time handicap (1-2.9/3.3), so if we wanted to measure the true algorithmic 12 core vs 1 core improvement, we would have to give the 12 core engine 12% more thinking time.

However, these tests were run in a way that does not trigger the worst case, since only 2*N vs 1*N core matches were played (for various values of N), and as many parallel games as possible were played on each test computer. For example, consider a 4 core vs 2 cores match played on the E5-2680 computer. If only one game were to be played in parallel, the speed difference would be 1-(2.5+5*0.1)/(2.5+8*0.1)=9%. However since only 4 cores are needed for each game, 3 games are played in parallel. This means that at least 2*3 cores are active at any time, so it can be seen from the table above that the turbo boost multiplier is always 4, and hence the speed difference is zero.

I think the largest difference that actually occurred in the test data is 8 cores vs 4 cores played on the E5-2670 computer. In this case the speed difference is 1-(2.6+4*0.1)/(2.6+6*0.1)=6%.
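The frequency arithmetic above can be expressed directly from the table (using the E5-2680 v3 row; the multipliers are copied from that table, so treat them as its assumptions rather than independently verified values):

```cpp
// Effective clock (GHz) of the E5-2680 v3 as a function of active cores:
// base 2.5 GHz plus the turbo multiplier from the table times 0.1 GHz.
double e5_2680v3_GHz(int activeCores) {
    static const int mult[12] = { 8, 8, 6, 5, 4, 4, 4, 4, 4, 4, 4, 4 }; // 1..12 active cores
    return 2.5 + mult[activeCores - 1] * 0.1;
}

// Relative time handicap of the side using manyCores compared to fewCores.
double turboHandicap(int fewCores, int manyCores) {
    return 1.0 - e5_2680v3_GHz(manyCores) / e5_2680v3_GHz(fewCores);
}
```

For the worst-case example above, turboHandicap(1, 12) reproduces the roughly 12% figure.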


Possible improvements

Post by petero2 » Sun Aug 06, 2017 9:17 pm

For large clusters the cluster nodes are often connected in a regular structure, such as a 3D or 5D hyper torus. If the tree of cluster nodes created by the algorithm took the cluster connectivity graph into account, the communication could be more efficient. For example, a breadth-first search starting from node 0 could be used to construct the tree. It is unknown how much this would help, though, since large clusters also often have faster interconnects than Ethernet, so maybe it is already fast enough without this optimization.
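The breadth-first construction could look like this (a sketch; the adjacency-list input and the names are my assumptions, not texel code):

```cpp
#include <queue>
#include <vector>

// Build a communication tree by breadth-first search from node 0 over the
// cluster connectivity graph. parent[i] is the tree parent of node i
// (-1 for the root), so tree neighbors are always direct graph neighbors.
std::vector<int> buildCommTree(const std::vector<std::vector<int>>& adj) {
    std::vector<int> parent(adj.size(), -1);
    std::vector<bool> visited(adj.size(), false);
    std::queue<int> q;
    visited[0] = true;
    q.push(0);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u]) {
            if (!visited[v]) {
                visited[v] = true;
                parent[v] = u;
                q.push(v);
            }
        }
    }
    return parent;
}
```

Since BFS only ever links a node to a direct neighbor in the connectivity graph, every tree edge is a single-hop link, which is the property that would make communication cheaper on a torus.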

The elo measurements showed that when the transposition table is not large enough to contain the entire search tree, multi core searches suffer more than increased time searches. It might be possible to improve the hash replacement strategy to improve both multi core and single core performance for the too small hash table case.

The tests where more than one NUMA node was used showed worse performance than expected. Texel tries to allocate the entire transposition table on NUMA node 0 and threads running on other NUMA nodes skip the hash table probe if the remaining search depth is 0. This logic is inherited from the old texel SMP algorithm, where measurements showed that it gave a noticeable reduction in the number of remote memory accesses. It is possible that this logic is bad for elo though, in which case it should be removed.

The elo measurements showed that adding more cluster nodes gave less elo improvement than adding the same number of additional cores in the SMP case. There are at least four cluster overheads compared to SMP:
* The main thread on each cluster node has to handle the MPI communication.
* The main thread on each cluster node has to handle the insertion of hash table entries into the node-local hash table.
* The delay between generating a hash table entry and the entry being available to all search threads is larger.
* Hash table entries with too small a depth are not shared between cluster nodes.
Measurements could be performed to determine the size of these effects. It might be possible to reduce the overheads, for example by offloading some of the work to a special communication thread.

The extra depth used by helper threads is currently computed as "threadNumber % 2". If there is a large number of helper threads it might be better to let some of them search at even larger depths.
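For comparison, the current scheme and one possible logarithmic alternative (the alternative is purely hypothetical, not something that has been tested):

```cpp
// Current texel scheme: helper threads alternate between +0 and +1 extra depth.
int extraDepthCurrent(int threadNumber) {
    return threadNumber % 2;
}

// Hypothetical alternative: floor(log2(threadNumber)), so that with many
// helpers a few of them search at substantially larger depths.
int extraDepthLog(int threadNumber) {
    int d = 0;
    for (int t = threadNumber; t > 1; t /= 2)
        d++;
    return d;
}
```

Whether a scheme like this gains anything would of course have to be measured by playing games.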

Finally, tests have shown that when running on 16 cores on a single computer, the lazy SMP algorithm is around 12 elo weaker than texel's old SMP algorithm. It is not obvious how to extend the old algorithm to clusters, but it is possible that some sort of generalization of that algorithm could perform better on clusters than the current lazy algorithm does.

Daniel Shawul
Posts: 3724
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Lazy SMP and "lazy cluster" experiments

Post by Daniel Shawul » Mon Aug 07, 2017 5:35 pm

Hi Peter,
I did similar experiments in scorpio a few years ago with smp and cluster algorithms. Three algorithms were included in my tests, YBW, ABDADA and SHT (shared hash table, or lazy algorithm), for both smp and cluster, with the default being YBW for both. Of course on the cluster you have the additional issue of a distributed/local transposition table that is fundamental to the performance of the last two algorithms. I think the depth of transposition table entries shared between nodes was too high for my tests because of the slow interconnect, but maybe ABDADA would be a better default option for the cluster on a faster interconnect. My SHT was pretty lazy in that I don't do different depth searches with different threads/processes. To this day, I feel like playing with root search depths is a kludge and ABDADA is the right algorithm for an approach that relies only on a shared transposition table. Note that ABDADA was actually first proposed as a distributed algorithm, as is YBW as a matter of fact.
Daniel


Re: Lazy SMP and "lazy cluster" experiments

Post by petero2 » Mon Aug 07, 2017 7:06 pm

Daniel Shawul wrote:Hi Peter,
I did similar experiments in scorpio a few years ago with smp and cluster algorithms. Three algorithms were included in my tests, YBW, ABDADA and SHT (shared hash table, or lazy algorithm), for both smp and cluster, with the default being YBW for both. Of course on the cluster you have the additional issue of a distributed/local transposition table that is fundamental to the performance of the last two algorithms. I think the depth of transposition table entries shared between nodes was too high for my tests because of the slow interconnect, but maybe ABDADA would be a better default option for the cluster on a faster interconnect. My SHT was pretty lazy in that I don't do different depth searches with different threads/processes. To this day, I feel like playing with root search depths is a kludge and ABDADA is the right algorithm for an approach that relies only on a shared transposition table. Note that ABDADA was actually first proposed as a distributed algorithm, as is YBW as a matter of fact.
Daniel
Hi Daniel,

I actually read a lot of old interesting posts from you for inspiration while working on texel's cluster algorithm. I remember reading that you had trouble getting good performance because of overhead caused by waiting for responses from other cluster nodes. I wanted to avoid that and therefore only considered algorithms where no node ever has to wait for an answer from any other node during search.

Lazy SMP is easy to implement in a totally asynchronous way, so I went with that. I am not completely happy that I lose around 12 elo on 16 core SMP compared to the old SMP algorithm though. For the time being lazy cluster easily beats non-lazy SMP though, since cluster makes it possible to use many more cores, so that is what I use in HGM's monthly tournaments.

It is possible that ABDADA could work well even if hash table updates are not atomic. I think I read somewhere that ABDADA can be made fault tolerant by not using an "nproc" counter, but instead just use a boolean "visited" flag in the hash table. If it is fault tolerant I think the worst that could happen is that it falls back to lazy SMP behavior if several threads/nodes modify the "visited" flag concurrently. It could be interesting to test this.

Regarding minimum sharing depth, I did a quick and dirty measurement of that for the root node when running on two single threaded cluster nodes connected by 1 gigabit Ethernet (ping time around 0.2ms). The average minimum sharing depth when searching in the start position was 1.04, distributed like this:

Code:

min depth   % of time
0              7.04
1             82.21
2             10.73
3              0.02
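The quoted 1.04 average is just the mean of this distribution:

```cpp
// Mean of the measured minimum sharing depth distribution above.
double meanMinSharingDepth() {
    const double pct[4] = { 7.04, 82.21, 10.73, 0.02 };  // % of time for depth 0..3
    double mean = 0;
    for (int d = 0; d < 4; d++)
        mean += d * pct[d] / 100.0;
    return mean;
}
```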
I plan to do a more accurate measurement later in several different cluster configurations.

My long term goal is to figure out an algorithm for a distributed asynchronous approximate priority queue, which I could use as a building block when generalizing the old texel SMP algorithm to work in a cluster.

D Sceviour
Posts: 443
Joined: Mon Jul 20, 2015 3:06 pm
Contact:

Re: Lazy SMP and "lazy cluster" experiments

Post by D Sceviour » Mon Aug 07, 2017 10:19 pm

petero2 wrote:It is possible that ABDADA could work well even if hash table updates are not atomic. I think I read somewhere that ABDADA can be made fault tolerant by not using an "nproc" counter, but instead just use a boolean "visited" flag in the hash table. If it is fault tolerant I think the worst that could happen is that it falls back to lazy SMP behavior if several threads/nodes modify the "visited" flag concurrently. It could be interesting to test this.
Interesting. I was playing around with something like this today. Normally, hash depth > current node depth would indicate that the node has already been searched and therefore can return safely with the stored hash value. However, if a stored hash flag said it was "busy" being searched by another thread, then what should be done? If the current thread continues then it is just duplicating another thread. If the current thread wants to avoid this, then what hash value should be returned?

jdart
Posts: 3804
Joined: Fri Mar 10, 2006 4:23 am
Location: http://www.arasanchess.org

Re: Lazy SMP and "lazy cluster" experiments

Post by jdart » Mon Aug 07, 2017 10:46 pm

Last time I tried ABDADA it was a big loser, but that was a long time ago and not on modern hardware.

I am kind of afraid to touch my SMP implementation because it works very reliably and it took a long time to get it there. But it doesn't scale as well as I would like. I have this on my to-do list to look at but it is not at the top of the list.

Thanks Peter for your very detailed writeup.

--Jon


Re: Lazy SMP and "lazy cluster" experiments

Post by petero2 » Tue Aug 08, 2017 6:14 pm

D Sceviour wrote:
petero2 wrote:It is possible that ABDADA could work well even if hash table updates are not atomic. I think I read somewhere that ABDADA can be made fault tolerant by not using an "nproc" counter, but instead just use a boolean "visited" flag in the hash table. If it is fault tolerant I think the worst that could happen is that it falls back to lazy SMP behavior if several threads/nodes modify the "visited" flag concurrently. It could be interesting to test this.
Interesting. I was playing around with something like this today. Normally, hash depth > current node depth would indicate that the node has already been searched and therefore can return safely with the stored hash value. However, if a stored hash flag said it was "busy" being searched by another thread, then what should be done? If the current thread continues then it is just duplicating another thread. If the current thread wants to avoid this, then what hash value should be returned?
The idea was that the busy flag is only a hint that affects in what order moves are searched, so if there is a conflict between the busy flag and the result already stored in a TT entry, the busy flag is ignored.

Something like this (untested):

Code:

int abdada(Position pos, int alpha, int beta, int depth, bool exclusive) {
    // Check for abort
    // Repetition check

    TTEntry& e = tt.probe(pos);
    if (e.cutOff(alpha, beta, depth))
        return e.score;
    if (exclusive && e.busy)
        return BUSY;
    e.busy = true;

    // null move, forward pruning, etc

    MoveList moves = generateMoves(pos);
    bool searched[256];
    int nextDepth[256];
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < moves.size(); i++) {
            if (pass == 0)
                nextDepth[i] = depth - 1 + extension - reduction;
            else if (searched[i])
                continue;
            bool nextExclusive = pass == 0 && i > 0;
            pos.makeMove(moves[i]);
            int score = -abdada(pos, -beta, -alpha, nextDepth[i], nextExclusive);
            pos.unMakeMove(moves[i]);
            searched[i] = score != -BUSY;

            // Update alpha and bestScore, break if done
        }
    }

    tt.store(pos, bestScore); // Also sets entry.busy = false;
    return bestScore;
}
The idea is that maintaining sequential consistency across cluster nodes is too expensive, so instead have an algorithm that works even if e.busy is sometimes wrong. When it is wrong several threads could search the same subtree simultaneously, possibly causing search overhead. The hope is that this still contributes to elo increase, like in lazy SMP, but there is a risk that it does not work well when the "depth + 1" trick from lazy SMP is not used.

I don't know if this algorithm would be better than lazy SMP, or if it can be combined with the "depth + 1" idea from lazy SMP. It would have to be tested by playing games.


Re: Lazy SMP and "lazy cluster" experiments

Post by D Sceviour » Tue Aug 08, 2017 11:07 pm

petero2 wrote:The idea is that maintaining sequential consistency across cluster nodes is too expensive, so instead have an algorithm that works even if e.busy is sometimes wrong. When it is wrong several threads could search the same subtree simultaneously, possibly causing search overhead. The hope is that this still contributes to elo increase, like in lazy SMP, but there is a risk that it does not work well when the "depth + 1" trick from lazy SMP is not used.
The phrase "even if e.busy is sometimes wrong" sounds like a recipe for disaster. The algorithm would still have to wait somewhere to make sure the "searched(i)" moves do not return a busy signal. The abdada search can only complete after all those busy nodes are finished, otherwise it may return an incorrect value.

Maybe try something strange like if alpha>hash_value then return hash_value [untested]. This ensures the previous ply will fail-high on the search. The final score for ply-1 would be on the safe side. A refutation of the "busy" node by another thread should verify the case condition when it is complete.


Re: Lazy SMP and "lazy cluster" experiments

Post by Daniel Shawul » Wed Aug 09, 2017 3:48 pm

I remember reading that you had trouble getting good performance because of overhead caused by waiting for responses from other cluster nodes.
Not sure about that. There is not a lot of waiting for responses from other nodes in YBW; messages are processed asynchronously via MPI_Iprobe(), and can even be processed as they arrive if the hardware allows it. IIRC the BG/Q for instance has a dedicated 17th core for that purpose. If you allow any thread to process MPI messages you have a lot more complexity.

Of course the standard YBW has some waiting to do, for the last processor to finish and for the first move to be searched fully before parallelization. But those have barely any effect on performance, and the effect is the same for both smp and cluster machines. The problem with the cluster is that the split depth is often much larger than that for smp due to the often slow interconnect. Other than that, the cluster YBW implementation gave me almost the same speedup as the smp implementation, both running on an smp machine.
Lazy SMP is easy to implement in a totally asynchronous way, so I went with that. I am not completely happy that I lose around 12 elo on 16 core SMP compared to the old SMP algorithm though. For the time being lazy cluster easily beats non-lazy SMP though, since cluster makes it possible to use many more cores, so that is what I use in HGM's monthly tournaments.
What I don't like about lazy is that ABDADA has a work sharing mechanism inside the TT, while lazy fully relies on the threads searching different parts of the tree via ad hoc techniques like using different root depths, different root move ordering, or intentionally starting one thread slower than the other (just came up with this one now). Why not do all of this on top of ABDADA? It should be able to perform better. My lazy smp just shares the TT without using such tricks and served as a baseline for the performance of YBW and ABDADA.

Have you done multi-node tests for your cluster implementation? I am sure losing just 12 elo on an smp machine is something you should be happy about, but performance dips heavily when you start adding more nodes. On an SMP machine, using pure MPI mode should actually be equivalent to smp mode, because message passing could be done more efficiently by just copying data in shared memory.

I think that with the simple approach you have for the cluster, the distributed TT scheme has more impact than anything else. My opinion is that if YBW performed better than lazy on SMP, it should perform better on the cluster too. Asynchronous schemes often have way too much overhead, but given that we have way too much power on the cluster, who knows which algorithm works better for it.

Thanks for the tests,
Daniel


Re: Lazy SMP and "lazy cluster" experiments

Post by petero2 » Wed Aug 09, 2017 5:07 pm

D Sceviour wrote:
petero2 wrote:The idea is that maintaining sequential consistency across cluster nodes is too expensive, so instead have an algorithm that works even if e.busy is sometimes wrong. When it is wrong several threads could search the same subtree simultaneously, possibly causing search overhead. The hope is that this still contributes to elo increase, like in lazy SMP, but there is a risk that it does not work well when the "depth + 1" trick from lazy SMP is not used.
The phrase "even if e.busy is sometimes wrong" sounds like a recipe for disaster. The algorithm would still have to wait somewhere to make sure the "searched(i)" moves do not return a busy signal. The abdada search can only complete after all those busy nodes are finished, otherwise it may return an incorrect value.
Actually no thread has to wait for any other thread. If you look at my pseudo code it can be seen that all moves are searched exactly once (except in the case of a fail high, which stops the move loop immediately). The first pass searches move 0 and all moves where the busy flag indicates that nobody else is likely to be searching that move. The second pass searches all remaining moves (that's what the searched[] array is used for). If somebody else is already searching the same move in the second pass, that information is ignored and the thread will search the move itself anyway.

Therefore you will get "valid" scores returned by all threads, regardless of what other threads are doing. By "valid" I mean that the returned score is based on searching all possible moves, so it is for example not possible to "forget" to search the only move that recaptures a queen.

Whether or not this will actually cause stronger play than a single threaded search has to be tested. In the worst case the busy flag is wrong all the time and there is none of the "widening" effect that has been observed in lazy SMP.

I plan to test this algorithm and report how it performs.

Actually, reading the pseudo code again, I realize that it might not be clear that alpha and bestScore should only be updated if the move was actually searched, like this:

Code:

int approximateAbdada(Position pos, int alpha, int beta, int depth, bool exclusive) {
    // Check for abort
    // Repetition check

    TTEntry& e = tt.probe(pos);
    if (e.cutOff(alpha, beta, depth))
        return e.score;
    if (exclusive && e.busy)
        return BUSY;
    e.busy = true;

    // null move, forward pruning, etc

    MoveList moves = generateMoves(pos);
    bool searched[256];
    int nextDepth[256];
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < moves.size(); i++) {
            if (pass == 0)
                nextDepth[i] = depth - 1 + extension - reduction;
            else if (searched[i])
                continue;
            bool nextExclusive = pass == 0 && i > 0;
            pos.makeMove(moves[i]);
            int score = -approximateAbdada(pos, -beta, -alpha, nextDepth[i], nextExclusive);
            pos.unMakeMove(moves[i]);
            searched[i] = score != -BUSY;
            if (!searched[i])               // <------------------------------------- Missing in previous post
                continue;

            // Update alpha and bestScore, break if done
        }
    }

    tt.store(pos, bestScore); // Also sets entry.busy = false;
    return bestScore;
}
