Shared hash table smp result

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Daniel Shawul
Posts: 3762
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Shared hash table smp result

Post by Daniel Shawul » Thu Mar 21, 2013 11:19 pm

I finally implemented splitting at the root, which made the code uglier since everything has to be in one search() function. I was curious about the speedup of the simplest SMP implementation, which was discussed here some time ago. The result is not pretty, even though my implementation is a little better than what was tested before. I just wanted ballpark figures, so I did not run the test thoroughly, but you can get the picture from the results below. I took 30 positions and ran Scorpio to a fixed depth of 17. The NPS scalings I got are 1.27, 1.59 and 1.67 for 2, 4 and 8 CPUs respectively, and the time scalings are 1.30, 1.42 and 1.42. The first move takes far longer than the other moves, so unless you do iterative splits, starting from the simplest PV-split and then maybe YBW etc., there is no way you will get numbers close to 1.7.

Code: Select all

		2cpu				4cpu				8cpu	
	Overhead	TimeScale	NpsScale		Overhead	TimeScale	NpsScale		Overhead	TimeScale	NpsScale
1	0.8691	1.2955	1.1256		1.1163	0.9974	1.1133		0.7631	1.5103	1.1519
2	1.2338	0.8596	1.0597		1.1555	1.0158	1.1740		0.8817	1.2657	1.1150
3	0.4672	2.2320	1.0428		0.7620	1.4228	1.0838		0.8854	1.2454	1.1024
4	1.1927	0.8969	1.0698		1.3574	0.8392	1.1396		1.1733	0.9225	1.0825
5	0.7027	1.7888	1.2557		1.1749	1.6567	1.9451		0.8468	1.9715	1.6680
6	0.7419	2.0501	1.5208		0.9921	2.2069	2.1900		0.7738	2.2692	1.7552
7	1.7148	0.6819	1.1692		1.0740	1.4376	1.5438		1.2228	1.6300	1.9930
8	0.9605	1.2761	1.2240		0.8433	1.6270	1.3702		0.8428	1.7150	1.4459
9	0.5916	1.8416	1.0893		0.7937	1.4140	1.1223		0.7468	1.5352	1.1464
10	1.3528	0.7736	1.0465		1.1873	0.9371	1.1127		1.4664	0.7225	1.0595
11	0.9058	1.3997	1.2677		1.4171	1.1360	1.6093		2.5366	0.8381	2.1259
12	0.9555	1.1926	1.1405		0.9644	1.2783	1.2338		1.4269	0.8427	1.2036
13	1.1643	1.2700	1.4788		1.4150	1.5100	2.1369		1.1920	2.3328	2.7810
14	0.7108	1.9206	1.3646		0.6669	2.3789	1.5878		0.8207	2.0796	1.7063
15	1.3975	0.8810	1.2322		1.3521	1.0471	1.4169		1.2786	1.2254	1.5675
16	1.0541	1.2808	1.3509		1.0024	1.7576	1.7623		1.9946	0.8388	1.6726
17	0.9512	1.1315	1.0762		1.1885	1.0995	1.3074		1.1776	1.1551	1.3607
18	1.7233	1.0859	1.8708		1.5643	1.6020	2.5059		2.6301	1.3538	3.5606
19	0.8552	1.9349	1.6561		1.0330	2.5979	2.6872		1.4987	1.9051	2.8571
20	0.9824	1.2052	1.1830		1.5294	1.2467	1.9053		0.9694	1.3697	1.3267
21	1.2847	1.0436	1.3408		1.1642	1.2854	1.4983		1.3634	1.1722	1.6003
22	1.0238	1.0558	1.0810		1.3459	0.8059	1.0847		1.2303	0.9067	1.1155
23	0.7007	1.8641	1.3060		1.1247	1.6322	1.8361		0.9451	1.8731	1.7708
24	1.0960	1.0090	1.1066		1.3383	0.9410	1.2598		0.8379	1.4693	1.2301
25	1.9010	0.6023	1.1460		1.9455	0.6974	1.3573		3.2548	0.5270	1.7166
26	1.0732	1.3767	1.4777		1.5533	1.3641	2.1193		1.5193	1.8182	2.7626
27	0.6496	2.0618	1.3390		0.8656	1.9100	1.6523		0.6940	2.0129	1.3975
28	0.9356	1.4298	1.3362		1.0108	1.5363	1.5516		1.1028	1.5333	1.6892
29	2.3943	0.5897	1.4115		1.1957	1.6960	2.0268		1.3354	1.5587	2.0810
30	1.2376	1.0095	1.2495		0.8240	1.6842	1.3878		1.2502	1.0492	1.3118
											
Avg	1.0941	1.3013	1.2673		1.1652	1.4254	1.5907		1.2887	1.4216	1.6786
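The root split described above can be sketched roughly as follows. This is a minimal sketch, not Scorpio's actual code; `assign_root_moves` and `search_root_move` are hypothetical stand-ins (a real engine would run alpha-beta on the position after each move):

```python
import threading

def assign_root_moves(n_moves, n_procs):
    """Deal the root moves out round-robin, one list per processor."""
    return [list(range(p, n_moves, n_procs)) for p in range(n_procs)]

def search_root_move(move, depth):
    # Hypothetical stand-in: a real engine would search the position
    # after `move` to `depth` and return a node count / score.
    return move * depth

def root_split(n_moves, n_procs, depth):
    nodes = [0] * n_procs
    def worker(p):
        for m in assign_root_moves(n_moves, n_procs)[p]:
            nodes[p] += search_root_move(m, depth)
    threads = [threading.Thread(target=worker, args=(p,)) for p in range(n_procs)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(nodes)
```

Note that nothing here reduces search overhead: every processor searches its share of root moves with no move ordering information from the others, which is exactly why the PV-split/YBW refinements matter.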


dchoman
Posts: 171
Joined: Wed Dec 28, 2011 7:44 pm
Location: United States

Re: Shared hash table smp result

Post by dchoman » Fri Mar 22, 2013 2:01 am

Daniel Shawul wrote: I finally implemented splitting at the root, which made the code uglier since everything has to be in one search() function. I was curious about the speedup of the simplest SMP implementation, which was discussed here some time ago. The result is not pretty, even though my implementation is a little better than what was tested before. I just wanted ballpark figures, so I did not run the test thoroughly, but you can get the picture from the results below. I took 30 positions and ran Scorpio to a fixed depth of 17. The NPS scalings I got are 1.27, 1.59 and 1.67 for 2, 4 and 8 CPUs respectively, and the time scalings are 1.30, 1.42 and 1.42. The first move takes far longer than the other moves, so unless you do iterative splits, starting from the simplest PV-split and then maybe YBW etc., there is no way you will get numbers close to 1.7.


This is interesting, but I am not sure I understand what the numbers mean. How are overhead, timescale and npsscale measured? I would think that a shared-hash SMP approach would produce perfect NPS scaling, but the time scaling should indeed be worse than YBW.

My current shared hash implementation in EXchess (same as released in v7.01/7.02) has perfect NPS scaling but the time to depth scaling is not great.... Over 317 positions, 2 threads gives a 1.57 time-to-depth improvement, 3 threads gives a 1.94 speed-up, and 4 threads gives a 2.13 speed-up. This is quite a bit worse than YBW, but not negligible. I can't run more cores than that on my machine, but a quick test by Martin Thoresen on his 16 core machine showed some further improvement up to 16 cores (the depth reached was ~2-3 ply deeper than 4 similar cores on my machine in the same time, but only a couple of positions were tested... and results can be quite variable from position to position, so no firm conclusions there).

- Dan

Daniel Shawul
Posts: 3762
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Shared hash table smp result

Post by Daniel Shawul » Fri Mar 22, 2013 4:31 am

This is interesting, but I am not sure I understand what the numbers mean. How are overhead, timescale and npsscale measured? I would think that a shared-hash SMP approach would produce perfect NPS scaling, but the time scaling should indeed be worse than YBW.
Overhead is the search overhead of the parallel search. The parallel search will search a lot of unnecessary nodes that a sequential alpha-beta would have avoided. This is an important parameter, so you should measure it. I suspect in your case it will be large, since you start the next ply=d+1 search when you don't yet have the best move or bounds at ply=d. The YBW criterion is meant to reduce this search overhead. If you wait for the first move to finish its search, then your NPS scaling will be far from perfect, as all the other processors have to wait while the first move is being searched. I think you tried this first, but you said it failed, and that you now start the parallel search immediately without waiting for the first move.
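The YBW criterion described here can be sketched as a root-move loop. This is a minimal single-threaded sketch of the ordering constraint only; `search` is a stand-in assumed to return an exact score:

```python
def ybw_root(moves, search, alpha, beta):
    # Young Brothers Wait: the first ("eldest") move must be searched
    # to completion with the full window before any sibling may be
    # handed to another processor.
    best = search(moves[0], alpha, beta)
    # Only now are the remaining "young brothers" eligible for parallel
    # search; they get zero-width scout windows around the current best.
    for m in moves[1:]:
        score = search(m, best, best + 1)
        if score > best:                    # scout fail-high:
            best = search(m, score, beta)   # re-search with a wider window
    return best
```

The point of the constraint is that after the eldest brother returns, the remaining moves can all be scouted with tight windows, which is what keeps the search overhead near 1.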
My current shared hash implementation in EXchess (same as released in v7.01/7.02) has perfect NPS scaling but the time to depth scaling is not great.... Over 317 positions, 2 threads gives a 1.57 time-to-depth improvement, 3 threads gives a 1.94 speed-up, and 4 threads gives a 2.13 speed-up.
You can get perfect scaling by doing a lot of useless work, so that by itself is not very interesting. Starting the next depth's search is one way to keep processors busy, but I guarantee the search overhead will be much larger. You can see that in my case the average overhead is close to 1 in all cases due to the use of YBW. Other methods in the past have all failed, so if anything is making the difference it is searching at two depths simultaneously. If that indeed improves your scaling, it probably means your sequential search would do better with an iterative-deepening increment of 2. Also, if you look at the history of parallel search, you will find many methods that had perfect scaling: parallel aspiration windows, parallel tactical/knowledge searches all done from the root, etc. What succeeded was the Root split -> PV split -> YBW line, which tries to minimize the search overhead. Alpha-beta with iterative deepening is a very hard algorithm to beat when it comes to searching the minimal tree.

I think you should run your test again at a larger depth (12 seems very small for your engine) and report the search overhead as well. The latter is important because the parallel search can sometimes get a much smaller tree by chance, which makes it seem to have searched faster when in reality it searched a smaller tree. You can see some of those cases in my results too. So I suggest you first run the test with all processors at the same depth and report the search overhead; then we can look at the effect of searching two depths simultaneously.
This is quite a bit worse than YBW, but not negligible. I can't run more cores than that on my machine, but a quick test by Martin Thoresen on his 16 core machine showed some further improvement up to 16 cores (the depth reached was ~2-3 ply deeper than 4 similar cores on my machine in the same time, but only a couple of positions were tested... and results can be quite variable from position to position, so no firm conclusions there).

dchoman
Posts: 171
Joined: Wed Dec 28, 2011 7:44 pm
Location: United States

Re: Shared hash table smp result

Post by dchoman » Fri Mar 22, 2013 10:27 am

Daniel Shawul wrote:
This is interesting, but I am not sure I understand what the numbers mean. How are overhead, timescale and npsscale measured? I would think that a shared-hash SMP approach would produce perfect NPS scaling, but the time scaling should indeed be worse than YBW.
Overhead is the search overhead of the parallel search. The parallel search will search a lot of unnecessary nodes that a sequential alpha-beta would have avoided. This is an important parameter, so you should measure it. I suspect in your case it will be large, since you start the next ply=d+1 search when you don't yet have the best move or bounds at ply=d. The YBW criterion is meant to reduce this search overhead. If you wait for the first move to finish its search, then your NPS scaling will be far from perfect, as all the other processors have to wait while the first move is being searched. I think you tried this first, but you said it failed, and that you now start the parallel search immediately without waiting for the first move.
Indeed, that did fail, so my current search starts the first move in every thread simultaneously, as I originally described. This is why my NPS scaling is perfect. I report that not because perfect NPS scaling is something to be proud of (all that matters is the time to complete a depth), but because if you don't have perfect or near-perfect NPS scaling, you are not executing the same algorithm. Waiting for the first move to finish in one thread, without also searching it in the other threads simultaneously, throws away a lot of useful work the other processors could be doing. Indeed, there will be even more useless work, and I am in no way suggesting that this is better than a full YBW implementation, but for a 'lazy SMP' approach it is not terrible.
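This kind of start-up can be sketched as follows: every thread is launched on the same root search immediately, and all work sharing happens through a shared table. A minimal sketch with assumed names; the dict-as-hash-table and `fake_search` are illustrative stand-ins, not EXchess's code:

```python
import threading

shared_tt = {}            # shared transposition table: (key, depth) -> score
tt_lock = threading.Lock()

def fake_search(key, depth):
    # Stand-in for a real alpha-beta search that probes/stores shared_tt.
    return depth

def worker(root_key, target_depth):
    # Every thread starts on the first move immediately; no thread ever
    # waits on another, which is why NPS scales (near) perfectly.
    # Work sharing happens only implicitly, through table hits.
    for d in range(1, target_depth + 1):
        with tt_lock:
            hit = shared_tt.get((root_key, d))
        if hit is None:
            score = fake_search(root_key, d)
            with tt_lock:
                shared_tt[(root_key, d)] = score

def lazy_smp(root_key, target_depth, n_threads):
    threads = [threading.Thread(target=worker, args=(root_key, target_depth))
               for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return shared_tt[(root_key, target_depth)]
```

The trade-off discussed in the thread is visible here: nothing prevents two threads from searching the same subtree before the table entry lands, so nodes searched (overhead) grows even though raw NPS is perfect.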

While I understand what overhead is, your description still does not tell me how you measure it. It looks like it may be a ratio, but I am not certain which two numbers are divided.

Regarding doing the test at depths > 12: yes, that is an excellent idea. I've run quick tests at depths 20 and 25 on much smaller numbers of positions (~10). As I recall, they were about the same as the depth-12 results on average, but the results were highly dependent on which positions I chose... this is what led me to run so many positions quickly. I'll find some reasonable compromise of depth and number of positions and run again tonight. I'll also try it with all threads at the same depth, as well as with some at depth + 2, for comparison, as I think those are excellent ideas as well.

- Dan

Daniel Shawul
Posts: 3762
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Shared hash table smp result

Post by Daniel Shawul » Fri Mar 22, 2013 2:13 pm

The overhead is a ratio of total nodes searched. All the columns are ratios relative to the sequential search, so 1 indicates no overhead, no improvement, etc. Here is the raw data for 1, 2, 4 and 8 CPUs respectively; I just took ratios of the first three columns to get those numbers.
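As a quick check, the first row of the earlier 2-CPU table can be reproduced from the first rows of the 1-CPU and 2-CPU raw data below:

```python
def smp_ratios(seq, par):
    """seq and par are (nodes, time, nps) rows from the raw data."""
    overhead = par[0] / seq[0]      # extra nodes searched by the parallel search
    time_scale = seq[1] / par[1]    # time-to-depth speedup
    nps_scale = par[2] / seq[2]     # raw speed scaling
    return overhead, time_scale, nps_scale

# Position 1: 1-CPU row vs 2-CPU row
o, t, n = smp_ratios((17295750, 15.39, 1124122), (15031194, 11.88, 1265358))
print(round(o, 4), round(t, 4), round(n, 4))   # 0.8691 1.2955 1.1256
```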

Code: Select all

Nodes	Time	NPS	splits	bad
=====	====	===	======	===
17295750	15.39	1124122	0	0
7787521	6.43	1211499	0	0
16593840	12.99	1277530	0	0
18710717	12.26	1525661	0	0
10304734	8.3	1241384	0	0
25351863	22.51	1126299	0	0
22843769	19.25	1186689	0	0
9598877	6.98	1375788	0	0
53262557	37.09	1435880	0	0
45629694	33.24	1372610	0	0
23292851	18.63	1250421	0	0
10204817	8.36	1220088	0	0
73863825	63.92	1155548	0	0
16023972	13.06	1227044	0	0
6682790	5.11	1307530	0	0
9585076	8.12	1180573	0	0
13494091	10.5	1284784	0	0
21920724	17.83	1229567	0	0
9210428	7.43	1240127	0	0
8038042	6.52	1233396	0	0
7456049	5.99	1244333	0	0
14040388	10.59	1326066	0	0
33664902	27.16	1239366	0	0
7880394	6.7	1176353	0	0
4370861	3.71	1177177	0	0
19006124	14.8	1284371	0	0
14173757	9.34	1517857	0	0
8207965	8.05	1020256	0	0
11311286	9.43	1199754	0	0
11523376	9.6	1200351	0	0

Code: Select all

Nodes	Time	NPS	splits	bad
=====	====	===	======	===
15031194	11.88	1265358	14	1
9608006	7.48	1283806	14	1
7752093	5.82	1332203	14	1
22316384	13.67	1632149	13	0
7240679	4.64	1558811	14	1
18809437	10.98	1712907	15	2
39172620	28.23	1387427	14	1
9219954	5.47	1684009	15	2
31508008	20.14	1564138	14	1
61727459	42.97	1436424	14	1
21098457	13.31	1585158	14	1
9750316	7.01	1391510	14	1
85998682	50.33	1708866	15	2
11389372	6.8	1674415	16	3
9339427	5.8	1611079	14	1
10103554	6.34	1594878	13	0
12835147	9.28	1382650	14	1
37775391	16.42	2300291	17	4
7876328	3.84	2053801	13	0
7896842	5.41	1459135	14	1
9578473	5.74	1668432	14	1
14374106	10.03	1433540	15	2
23589745	14.57	1618618	13	0
8636813	6.64	1301705	15	2
8309102	6.16	1349099	15	2
20396948	10.75	1897920	16	3
9206687	4.53	2032381	15	2
7679486	5.63	1363303	14	1
27082649	15.99	1693512	15	2
14261518	9.51	1499791	13	0

Code: Select all

Nodes	Time	NPS	splits	bad
=====	====	===	======	===
19306397	15.43	1251468	15	2
8998764	6.33	1422279	14	1
12645124	9.13	1384553	14	1
25396998	14.61	1738686	13	0
12107069	5.01	2414652	16	3
25151393	10.2	2466548	15	2
24534850	13.39	1832052	13	0
8094789	4.29	1885139	14	1
42272481	26.23	1611485	15	2
54174340	35.47	1527285	14	1
33007480	16.4	2012283	15	2
9841966	6.54	1505348	14	1
104514718	42.33	2469337	14	1
10686470	5.49	1948308	16	3
9035488	4.88	1852673	14	1
9607812	4.62	2080513	13	0
16038138	9.55	1679737	13	0
34289839	11.13	3081125	19	6
9514096	2.86	3332433	13	0
12293077	5.23	2350043	14	1
8680463	4.66	1864360	14	1
18896789	13.14	1438330	15	2
37862228	16.64	2275647	13	0
10546014	7.12	1482014	16	3
8503485	5.32	1597798	15	2
29522611	10.85	2721981	16	3
12268621	4.89	2507894	15	2
8296805	5.24	1583057	13	0
13524662	5.56	2431618	14	1
9495067	5.7	1665801	13	0

Code: Select all

Nodes	Time	NPS	splits	bad
=====	====	===	======	===
13199237	10.19	1294931	14	1
6866304	5.08	1350836	14	1
14691631	10.43	1408323	15	2
21953164	13.29	1651483	14	1
8725622	4.21	2070626	15	2
19616397	9.92	1976861	15	2
27933359	11.81	2365029	13	0
8090167	4.07	1989222	14	1
39775438	24.16	1646062	13	0
66911782	46.01	1454224	15	2
59085231	22.23	2658264	15	2
14561679	9.92	1468503	15	2
88047157	27.4	3213634	14	1
13150273	6.28	2093659	15	2
8544778	4.17	2049598	14	1
19117973	9.68	1974589	13	0
15890913	9.09	1748175	13	0
57654517	13.17	4378048	15	2
13803941	3.9	3543106	13	0
7792221	4.76	1636333	14	1
10165715	5.11	1991325	14	1
17273964	11.68	1479188	15	2
31815531	14.5	2194628	13	0
6602656	4.56	1446998	16	3
14226196	7.04	2020766	15	2
28875374	8.14	3548215	15	2
9836319	4.64	2121267	13	0
9051351	5.25	1723410	13	0
15104760	6.05	2496654	14	1
14406213	9.15	1574621	13	0

Daniel Shawul
Posts: 3762
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Shared hash table smp result

Post by Daniel Shawul » Fri Mar 22, 2013 2:21 pm

Also, I should note that what is commonly referred to as the 'shared hash table' approach does not split the move list. Processors share work on the first root move too, via hash-table communication. The algorithm ABDADA, IIRC, does its work sharing through the hash table, so that is actually a better algorithm. With only root split + YBW, the picture is grim.
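The ABDADA idea can be sketched very roughly as a two-pass move loop. Heavy simplification: real ABDADA stores a "processors currently searching" count inside the shared hash entries themselves and applies the rule at every node, not in a standalone loop like this; the dict and names here are illustrative only:

```python
searching = {}   # position/move key -> number of processors currently on it

def abdada_move_loop(moves, search_one):
    # The eldest move is always searched (everyone helps on it, sharing
    # its work through the hash table).  Other moves that some processor
    # is already searching are deferred and revisited later.
    deferred = []
    best = float('-inf')
    for i, m in enumerate(moves):
        if i > 0 and searching.get(m, 0) > 0:
            deferred.append(m)       # someone else is on it: come back later
            continue
        searching[m] = searching.get(m, 0) + 1
        best = max(best, search_one(m))
        searching[m] -= 1
    for m in deferred:               # second pass: whatever is still needed
        best = max(best, search_one(m))
    return best
```

The deferral is what cuts duplicated work compared to a plain shared-hash search, while still never leaving a processor idle.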

dchoman
Posts: 171
Joined: Wed Dec 28, 2011 7:44 pm
Location: United States

Depth = 20 Results

Post by dchoman » Sat Mar 23, 2013 12:05 am

I've now tested 100 positions to a depth of 20 for a variety of cases. The results are below

Each of the following four cases uses the default setup of EXchess...

Threads 1 & 2 search the nominal depth. Threads 3 & 4 search nominal depth + 1.

==> smp_vtest1_1 core <==
Total NPS = 635114 <depth> = 20 <time to depth> = 8.24
==> smp_vtest1_2 cores <==
Total NPS = 1290624 <depth> = 20 <time to depth> = 5.28
==> smp_vtest1_3 cores <==
Total NPS = 1952499 <depth> = 20 <time to depth> = 4.24
==> smp_vtest1_4 cores <==
Total NPS = 2629987 <depth> = 20 <time to depth> = 3.50

So we get the following ratios compared to 1 Thread.

2 cores: NPS_ratio = 2.03 Time_to_depth_speedup = 1.56
3 cores: NPS_ratio = 3.07 Time_to_depth_speedup = 1.94
4 cores: NPS_ratio = 4.14 Time_to_depth_speedup = 2.35

It is interesting that the NPS_ratios are slightly larger than perfect NPS scaling. I suspect this is due to additional hits in the eval cache.

The overhead, as you define it, is the ratio of total nodes searched to complete this fixed depth, so it is just the ratio of the NPS_ratio to the Time_to_depth_speedup. So the overheads are...

2 cores: Overhead = 1.30
3 cores: Overhead = 1.58
4 cores: Overhead = 1.76

My machine has only 4 real cores, but I thought it would be interesting to try the 8 logical cores by running an 8-threaded version. In principle, each of these logical cores should be 0.5 times the speed of a real core. Here are the results I got...

Threads 1-4 are as above, threads 5 & 6 search nominal depth + 2, threads 7 & 8 search nominal depth + 3

==> smp_vtest1_8 fake cores <==
Total NPS = 3407011 <depth> = 20.1 <time to depth> = 3.81

Not sure why the NPS is larger than in the 4-core case, but perhaps the CPUs are more idle than I thought even when running 4 threads. If I scale the NPS up to twice the 4-core case (as if I had 8 real cores), then the <time to depth> = 2.47, which is a time-to-depth speedup of 3.33. Of course, this is really speculation, as I don't have 8 real cores, but it is in line with the trends for 2, 3 and 4 cores. The overall speedup seems to scale something like (1.5)^(log_2(N)), where N = number of cores.
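That empirical fit is easy to check; note that 1.5^(log2 N) is the same as N^(log2 1.5), i.e. roughly N^0.585:

```python
import math

def model_speedup(n_cores):
    # Dan's empirical fit for the time-to-depth speedup
    return 1.5 ** math.log2(n_cores)

for n in (2, 3, 4, 8):
    print(n, round(model_speedup(n), 2))
# gives 1.5, 1.9, 2.25, 3.38 vs the measured 1.56, 1.94, 2.35, 3.33
```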

I also tried the two cases you suggest:

4 Threads: all at nominal depth

==> smp_vtest2_4 cores <==
Total NPS = 2600541 <depth> = 20 <time to depth> = 3.59

4 Threads: 1 & 2 at nominal depth, 3 & 4 at nominal depth + 2

==> smp_vtest3_4 cores <==
Total NPS = 2615016 <depth> = 20.1 <time to depth> = 3.6

These are each slightly worse than my default parameters above, but it is not clear if this difference is significant.

- Dan

Daniel Shawul
Posts: 3762
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Depth = 20 Results

Post by Daniel Shawul » Sat Mar 23, 2013 2:39 am

I'd prefer you first test the case where all processors search at the same depth with YBW, so that we can compare with what I have now. You did this first, right? Then we can relax the criteria by dropping YBW, and finally search simultaneously at depth, depth+1, depth+2, etc. Right now I have too many questions. Is it better to search at d, d+1, d+2, or at the same depth with no YBW? What is the effect of tight aspiration windows? I use (-10,10), for instance, so a parallel searcher that does not have the proper score will have a lot of fail highs/fail lows. You have the score at depth=d, but you search d, d+1, d+2 simultaneously with it. So many factors to look at.

Anyway, I have quickly implemented the case where all processors search at the same depth and the first move is also searched in parallel. I did not verify it, so I will check tomorrow. Here the YBW principle is violated, and you will see that it is very crucial: even though I have better NPS figures, the result is worse than what I had with the YBW criterion. One thing you may have forgotten is that with 4 processors, the first four moves should have an open window. Do you do that? It is easy to forget, and it is the reason for the big search overhead.
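The point about open windows for the first n_processors moves can be made concrete. A sketch with an illustrative (-10,10) aspiration window; real window management of course lives inside the engine:

```python
def root_windows(n_moves, n_procs, alpha, beta):
    # With n_procs root moves launched simultaneously, none of the first
    # n_procs moves has a score to scout against, so each of them needs
    # the full (alpha, beta) window.  Only the later moves can use
    # zero-width scout windows (here around alpha, until a score exists).
    windows = []
    for i in range(n_moves):
        if i < n_procs:
            windows.append((alpha, beta))          # open window
        else:
            windows.append((alpha, alpha + 1))     # zero-width scout window
    return windows

w = root_windows(20, 4, -10, 10)
print(w[0], w[3], w[4])   # (-10, 10) (-10, 10) (-10, -9)
```

This matches the analysis log further down, where the first four root moves carry the full window and the rest are searched with one-point windows.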

First, the average results for 2 and 4 CPUs, in Overhead, TimeScale, NpsScale order:

Code: Select all

Avg	1.7050	1.0352	1.5143		2.0713	1.2333	2.0653
The NPS scaling is not perfect in my case, probably because a processor that finishes early has to wait for the others. This is a quick implementation, so I may have made a mistake. Meanwhile, you should make sure we are doing the same thing for the first n_processors moves, i.e. using an open window.

Code: Select all

		2cpu				4cpu	
	Overhead	TimeScale	NpsScale		Overhead	TimeScale	NpsScale
1	1.4493	0.9478	1.3738		2.0455	0.8325	1.7028
2	1.7545	1.0286	1.8019		2.0054	0.9853	1.9759
3	1.4847	0.9000	1.3362		0.6120	1.9821	1.2137
4	2.2458	0.6775	1.5212		3.7242	0.4900	1.8245
5	1.6031	1.0673	1.7103		2.2670	0.9639	2.1873
6	0.6169	2.8133	1.7357		0.9825	3.0546	2.9998
7	1.7864	0.8775	1.5676		1.3043	1.5733	2.0520
8	1.2163	1.2229	1.4877		2.4259	0.7250	1.7583
9	2.3960	0.5700	1.3655		1.9732	0.9282	1.8312
10	1.5930	0.9986	1.5906		2.5797	0.7018	1.8102
11	1.6236	0.8681	1.4096		2.2863	0.8190	1.8726
12	2.3724	0.6333	1.5026		3.4391	0.5997	2.0627
13	1.3348	1.3947	1.8616		1.6595	1.9456	3.2284
14	1.3478	1.2177	1.6398		5.3434	0.4491	2.4002
15	1.2727	1.3375	1.7018		1.1436	1.7319	1.9779
16	1.4966	1.0491	1.5694		1.7997	1.0354	1.8633
17	1.0618	1.3179	1.3990		1.1952	1.4155	1.6923
18	1.0153	1.2090	1.2274		1.1543	2.6482	3.0552
19	2.1226	0.5969	1.2677		4.2215	0.4309	1.8198
20	3.8686	0.3154	1.2193		2.0884	1.3118	2.7369
21	0.9938	1.7812	1.7718		2.4892	0.6823	1.6985
22	1.3512	0.9993	1.3504		1.3788	1.1035	1.5213
23	1.8690	0.8664	1.6196		1.2628	1.8881	2.3840
24	1.2052	1.1990	1.4454		1.0806	1.7123	1.8504
25	1.4156	1.0616	1.5031		1.8561	0.8140	1.5105
26	1.7904	0.6889	1.2332		2.2034	1.0318	2.2737
27	2.4591	0.6903	1.6979		3.1127	0.7976	2.4829
28	1.1472	1.3043	1.4961		1.4360	1.2146	1.7440
29	3.6513	0.3620	1.3217		2.1473	1.2601	2.7050
30	1.6046	1.0604	1.7010		0.9215	1.8709	1.7225
							
Avg	1.7050	1.0352	1.5143		2.0713	1.2333	2.0653
And the raw data

Code: Select all

Nodes	Time	NPS	splits	bad		Nodes	Time	NPS	splits	bad		Nodes	Time	NPS	splits	bad
=====	====	===	======	===		=====	====	===	======	===		=====	====	===	======	===
20386073	17.99	1133378	0	0		29544767	18.98	1557036	27	8		41698829	21.61	1929875	26	9
5647209	4.68	1207442	0	0		9908219	4.55	2175718	19	3		11325192	4.75	2385757	21	4
14966292	12.15	1231793	0	0		22221069	13.5	1645883	23	7		9158642	6.13	1495044	23	6
16942236	11.49	1474520	0	0		38049348	16.96	2243079	21	5		63095859	23.45	2690311	17	2
6666788	5.87	1135352	0	0		10687698	5.5	1941805	26	6		15113499	6.09	2483322	28	5
35377863	31.34	1128912	0	0		21824132	11.14	1959430	21	4		34758798	10.26	3386477	21	4
30282384	25.22	1200729	0	0		54096504	28.74	1882272	21	3		39498511	16.03	2463883	19	2
11360424	8.12	1399411	0	0		13817431	6.64	2081879	23	3		27559152	11.2	2460638	27	5
68088376	47.54	1432354	0	0		163140876	83.41	1955820	19	5		134351172	51.22	2622970	17	3
99328103	71.72	1385020	0	0		158229809	71.82	2203052	29	8		256233518	102.2	2507128	30	9
39450426	29.81	1323351	0	0		64050005	34.34	1865389	20	3		90195187	36.4	2478094	18	3
10506243	9.17	1145344	0	0		24925471	14.48	1721015	27	6		36132498	15.29	2362527	30	9
88256314	76.5	1153662	0	0		117801122	54.85	2147617	21	4		146458461	39.32	3724498	22	3
9021775	7.55	1195095	0	0		12159973	6.2	1959705	26	8		48207007	16.81	2868440	30	8
8542011	6.46	1323112	0	0		10871375	4.83	2251734	36	14		9769057	3.73	2616945	37	13
12581112	11.11	1132821	0	0		18829211	10.59	1777850	27	6		22642699	10.73	2110813	25	6
16539566	12.81	1291346	0	0		17561923	9.72	1806596	32	16		19768675	9.05	2185350	29	14
20566791	17.24	1193246	0	0		20881321	14.26	1464533	29	8		23740563	6.51	3645664	26	6
11772574	9.58	1228485	0	0		24988546	16.05	1557306	27	8		49698202	22.23	2235636	26	6
8211129	6.9	1190709	0	0		31765269	21.88	1451794	26	9		17147807	5.26	3258800	23	8
10362426	8.14	1273181	0	0		10297951	4.57	2255849	32	8		25794079	11.93	2162481	30	9
19620099	14.71	1333702	0	0		26510365	14.72	1800975	32	13		27051563	13.33	2028918	35	13
33469704	28.02	1194450	0	0		62555738	32.34	1934494	28	8		42266356	14.84	2847561	29	8
10948132	9.64	1135580	0	0		13194542	8.04	1641316	28	8		11830341	5.63	2101303	30	11
6634771	5.69	1166040	0	0		9392360	5.36	1752632	40	19		12314737	6.99	1761261	41	17
30697138	23.98	1280114	0	0		54959810	34.81	1578669	28	9		67638512	23.24	2910560	26	7
9129003	5.95	1533513	0	0		22449006	8.62	2603689	15	2		28416116	7.46	3807599	15	1
10367320	8.83	1173703	0	0		11892990	6.77	1755941	31	10		14887384	7.27	2046938	29	11
12130512	10.27	1181390	0	0		44291902	28.37	1561443	26	6		26048153	8.15	3195700	28	8
14578347	12.46	1170199	0	0		23392093	11.75	1990477	21	3		13434085	6.66	2015616	21	4


Daniel Shawul
Posts: 3762
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Depth = 20 Results

Post by Daniel Shawul » Sat Mar 23, 2013 3:11 am

Here is analysis from the start position with 4 processors that shows what I am talking about. The root moves are printed with their alpha-beta windows and the processor that is searching them.
Each root move is printed as follows:

Code: Select all

print("[%d] %d.%s (%d,%d)\n", processor_id, move_no, move, alpha, beta)
And here is the analysis of the initial position. You can see how the first four moves are searched in parallel with open windows. The output also explains the incurred idle times, since in many cases one of the processors can finish its zero-window moves pretty quickly.

Code: Select all

feature done=0
ht 4194304 X 16 = 64.0 MB
eht 1048576 X 8 = 8.0 MB
pht 32768 X 24 = 0.8 MB
processors &#91;1&#93;
EgbbProbe not Loaded!
loading_time = 0s
mt 4
+ Thread 1 started.
+ Thread 2 started.
+ Thread 3 started.
processors &#91;4&#93;
analyze
5 30 4 710  e2-e3
&#91;0&#93; 0.e2-e3 &#40;9,70&#41;
&#91;1&#93; 1.e2-e4 &#40;9,70&#41;
&#91;3&#93; 3.Nb1-c3 &#40;9,70&#41;
&#91;2&#93; 2.c2-c4 &#40;9,70&#41;
5 30 4 241  e2-e3 d7-d5 Ng1-f3 Bc8-f5 Nb1-c3
&#91;0&#93; 4.d2-d3 &#40;30,31&#41;
&#91;0&#93; 5.d2-d4 &#40;30,31&#41;
&#91;0&#93; 6.Nb1-a3 &#40;30,31&#41;
&#91;0&#93; 7.Ng1-h3 &#40;30,31&#41;
&#91;0&#93; 8.Ng1-f3 &#40;30,31&#41;
&#91;0&#93; 9.c2-c3 &#40;30,31&#41;
&#91;0&#93; 10.g2-g4 &#40;30,31&#41;
&#91;0&#93; 11.b2-b3 &#40;30,31&#41;
&#91;0&#93; 12.f2-f4 &#40;30,31&#41;
&#91;0&#93; 13.b2-b4 &#40;30,31&#41;
&#91;0&#93; 14.h2-h4 &#40;30,31&#41;
&#91;0&#93; 15.a2-a4 &#40;30,31&#41;
&#91;0&#93; 16.g2-g3 &#40;30,31&#41;
&#91;0&#93; 17.a2-a3 &#40;30,31&#41;
&#91;0&#93; 18.h2-h3 &#40;30,31&#41;
&#91;0&#93; 19.f2-f3 &#40;30,31&#41;
&#91;0&#93; 0.e2-e3 &#40;20,40&#41;
&#91;3&#93; 3.Nb1-c3 &#40;20,40&#41;
&#91;2&#93; 2.c2-c4 &#40;20,40&#41;
&#91;1&#93; 1.e2-e4 &#40;20,40&#41;
6 20 5 557  e2-e3
&#91;1&#93; 4.d2-d3 &#40;20,21&#41;
&#91;3&#93; 5.d2-d4 &#40;20,21&#41;
&#91;1&#93; 6.Nb1-a3 &#40;20,21&#41;
&#91;1&#93; 7.Ng1-h3 &#40;20,21&#41;
&#91;1&#93; 8.Ng1-f3 &#40;20,21&#41;
&#91;1&#93; 9.c2-c3 &#40;20,21&#41;
&#91;1&#93; 10.g2-g4 &#40;20,21&#41;
&#91;1&#93; 11.b2-b3 &#40;20,21&#41;
&#91;1&#93; 12.f2-f4 &#40;20,21&#41;
&#91;1&#93; 13.b2-b4 &#40;20,21&#41;
&#91;1&#93; 14.h2-h4 &#40;20,21&#41;
&#91;1&#93; 15.a2-a4 &#40;20,21&#41;
&#91;2&#93; 16.g2-g3 &#40;20,21&#41;
&#91;1&#93; 17.a2-a3 &#40;20,21&#41;
&#91;2&#93; 18.h2-h3 &#40;20,21&#41;
&#91;1&#93; 19.f2-f3 &#40;20,21&#41;
&#91;0&#93; 0.e2-e3 (-20,40&#41;
&#91;1&#93; 1.c2-c4 (-20,40&#41;
&#91;2&#93; 2.e2-e4 (-20,40&#41;
&#91;3&#93; 3.Nb1-c3 (-20,40&#41;
6 -16 6 2866  e2-e3 d7-d5 Nb1-c3 Nb8-c6 Bf1-d3 e7-e5
&#91;0&#93; 4.d2-d4 (-16,-15&#41;
&#91;1&#93; 5.d2-d3 (-16,-15&#41;
6 9 6 2284  Nb1-c3 Ng8-f6 e2-e4 e7-e5 d2-d4 Nb8-c6 d4xe5 Nc6xe5
&#91;3&#93; 6.Ng1-f3 &#40;9,10&#41;
&#91;3&#93; 7.Ng1-h3 &#40;9,10&#41;
&#91;3&#93; 8.Nb1-a3 &#40;9,10&#41;
&#91;3&#93; 9.c2-c3 &#40;9,10&#41;
&#91;3&#93; 10.g2-g4 &#40;9,10&#41;
&#91;3&#93; 11.b2-b3 &#40;9,10&#41;
&#91;3&#93; 12.f2-f4 &#40;9,10&#41;
&#91;3&#93; 13.b2-b4 &#40;9,10&#41;
&#91;3&#93; 14.h2-h4 &#40;9,10&#41;
&#91;3&#93; 15.a2-a4 &#40;9,10&#41;
&#91;3&#93; 16.g2-g3 &#40;9,10&#41;
&#91;3&#93; 17.a2-a3 &#40;9,10&#41;
&#91;3&#93; 18.h2-h3 &#40;9,10&#41;
&#91;3&#93; 19.f2-f3 &#40;9,10&#41;
6 4 7 3935  e2-e4 e7-e5 Ng1-f3 Nb8-c6 d2-d4 Ng8-f6
6 1 7 4884  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 Bc8-f5
6 -3 7 7499  d2-d3 Ng8-f6 Ng1-f3 d7-d5 Bc1-f4 Bc8-f5
&#91;0&#93; 0.Nb1-c3 (-1,19&#41;
&#91;1&#93; 1.e2-e4 (-1,19&#41;
&#91;2&#93; 2.d2-d3 (-1,19&#41;
&#91;3&#93; 3.e2-e3 (-1,19&#41;
&#91;3&#93; 4.c2-c4 (-1,0&#41;
&#91;3&#93; 5.d2-d4 (-1,0&#41;
7 19 7 1297  d2-d4
&#91;0&#93; 0.d2-d4 (-1,59&#41;
&#91;2&#93; 1.e2-e4 (-1,59&#41;
&#91;1&#93; 2.d2-d3 (-1,59&#41;
&#91;3&#93; 3.e2-e3 (-1,59&#41;
&#91;3&#93; 4.c2-c4 (-1,0&#41;
&#91;3&#93; 5.Nb1-c3 (-1,0&#41;
7 33 8 4069  d2-d3 e7-e5 Nb1-c3 Nb8-c6 Ng1-f3 Bf8-c5 Bc1-g5
&#91;1&#93; 6.Ng1-f3 &#40;33,34&#41;
&#91;1&#93; 7.Nb1-a3 &#40;33,34&#41;
&#91;1&#93; 8.Ng1-h3 &#40;33,34&#41;
&#91;1&#93; 9.b2-b4 &#40;33,34&#41;
&#91;1&#93; 10.g2-g4 &#40;33,34&#41;
&#91;1&#93; 11.c2-c3 &#40;33,34&#41;
7 6 8 4095  e2-e4 d7-d5 Nb1-c3 Ng8-f6 e4xd5 Nf6xd5 Nc3xd5 Qd8xd5 d2-d4
&#91;2&#93; 12.f2-f4 &#40;33,34&#41;
&#91;1&#93; 13.b2-b3 &#40;33,34&#41;
&#91;1&#93; 14.h2-h4 &#40;33,34&#41;
&#91;2&#93; 15.a2-a4 &#40;33,34&#41;
&#91;1&#93; 16.g2-g3 &#40;33,34&#41;
&#91;2&#93; 17.a2-a3 &#40;33,34&#41;
&#91;1&#93; 18.h2-h3 &#40;33,34&#41;
&#91;2&#93; 19.f2-f3 &#40;33,34&#41;
7 46 9 4763  d2-d4 d7-d5 Bc1-f4 Nb8-c6 Nb1-c3 Bc8-f5 Ng1-f3
7 46 9 5462  Nb1-c3 d7-d5 Ng1-f3 Nb8-c6 d2-d4 Bc8-f5 Bc1-f4
[0] 0.d2-d4 (36,56)
[2] 1.e2-e4 (36,56)
[3] 2.d2-d3 (36,56)
[1] 3.Nb1-c3 (36,56)
[1] 4.e2-e3 (36,37)
[1] 5.c2-c4 (36,37)
[3] 6.Ng1-f3 (36,37)
[1] 7.Nb1-a3 (36,37)
[1] 8.Ng1-h3 (36,37)
[1] 9.g2-g4 (36,37)
[1] 10.c2-c3 (36,37)
[1] 11.b2-b4 (36,37)
[1] 12.h2-h4 (36,37)
[1] 13.a2-a4 (36,37)
[1] 14.f2-f4 (36,37)
[1] 15.b2-b3 (36,37)
[3] 16.g2-g3 (36,37)
[1] 17.a2-a3 (36,37)
[3] 18.f2-f3 (36,37)
[1] 19.h2-h3 (36,37)
8 36 9 1145  d2-d4
[0] 0.d2-d4 (-4,56)
[2] 1.e2-e4 (-4,56)
[1] 2.d2-d3 (-4,56)
[3] 3.Nb1-c3 (-4,56)
8 9 10 3925  e2-e4 d7-d5 Nb1-c3 Ng8-f6 e4xd5 Nf6xd5 Ng1-f3 Bc8-f5 d2-d4
[2] 4.e2-e3 (9,10)
8 17 11 5749  Nb1-c3 d7-d5 d2-d4 Ng8-f6 Bc1-f4 Bc8-g4 Ng1-f3 Nb8-c6
[3] 5.c2-c4 (17,18)
[3] 6.Ng1-f3 (17,18)
[2] 7.Nb1-a3 (17,18)
[2] 8.Ng1-h3 (17,18)
[2] 9.g2-g4 (17,18)
[2] 10.c2-c3 (17,18)
[2] 11.b2-b4 (17,18)
[2] 12.h2-h4 (17,18)
[2] 13.a2-a4 (17,18)
[2] 14.f2-f4 (17,18)
[2] 15.b2-b3 (17,18)
[2] 16.g2-g3 (17,18)
[2] 17.a2-a3 (17,18)
[2] 18.f2-f3 (17,18)
[2] 19.h2-h3 (17,18)
8 17 11 7670  d2-d4 d7-d5 Bc1-f4 Ng8-f6 Nb1-c3 Bc8-g4 Ng1-f3 Nb8-c6
[0] 0.Nb1-c3 (7,27)
[1] 1.d2-d3 (7,27)
[3] 2.d2-d4 (7,27)
[2] 3.e2-e4 (7,27)
[1] 4.e2-e3 (7,8)
[1] 5.c2-c4 (7,8)
[1] 6.Ng1-f3 (7,8)
[1] 7.Nb1-a3 (7,8)
[1] 8.Ng1-h3 (7,8)
[2] 9.g2-g4 (7,8)
[1] 10.c2-c3 (7,8)
[2] 11.b2-b4 (7,8)
[1] 12.h2-h4 (7,8)
[2] 13.b2-b3 (7,8)
[1] 14.a2-a4 (7,8)
[2] 15.f2-f4 (7,8)
[1] 16.g2-g3 (7,8)
[2] 17.a2-a3 (7,8)
[1] 18.f2-f3 (7,8)
[1] 19.h2-h3 (7,8)
9 22 13 7933  Nb1-c3 d7-d5 d2-d4 Ng8-f6 Bc1-f4 Nb8-c6 Ng1-f3 Bc8-f5 e2-e3
9 22 13 8309  d2-d4 d7-d5 Bc1-f4 Nb8-c6 Ng1-f3 Bc8-f5 e2-e3 Ng8-f6 Nb1-c3
[0] 0.Nb1-c3 (12,32)
[1] 1.d2-d4 (12,32)
[3] 2.d2-d3 (12,32)
[2] 3.e2-e4 (12,32)
[2] 4.e2-e3 (12,13)
[3] 5.c2-c4 (12,13)
[3] 6.Ng1-f3 (12,13)
[3] 7.Nb1-a3 (12,13)
[3] 8.Ng1-h3 (12,13)
[3] 9.b2-b4 (12,13)
[3] 10.f2-f4 (12,13)
10 12 16 13143  Nb1-c3
[3] 11.c2-c3 (12,13)
[3] 12.h2-h4 (12,13)
[3] 13.g2-g3 (12,13)
[3] 14.b2-b3 (12,13)
[3] 15.g2-g4 (12,13)
[3] 16.a2-a3 (12,13)
[3] 17.h2-h3 (12,13)
[3] 18.a2-a4 (12,13)
[3] 19.f2-f3 (12,13)
10 32 21 36959  e2-e3
[0] 0.e2-e3 (12,72)
[3] 3.d2-d3 (12,72)
[3] 4.e2-e4 (12,13)
[3] 5.c2-c4 (12,13)
[3] 6.Nb1-a3 (12,13)
[3] 7.Ng1-f3 (12,13)
[3] 8.Ng1-h3 (12,13)
[1] 2.Nb1-c3 (12,72)
[3] 9.b2-b4 (12,13)
[3] 10.f2-f4 (12,13)
[3] 11.b2-b3 (12,13)
[3] 12.c2-c3 (12,13)
[3] 13.h2-h4 (12,13)
[3] 14.g2-g4 (12,13)
[3] 15.g2-g3 (12,13)
[3] 16.a2-a3 (12,13)
[3] 17.h2-h3 (12,13)
[3] 18.a2-a4 (12,13)
[3] 19.f2-f3 (12,13)
[2] 1.d2-d4 (12,72)
10 33 28 32017  e2-e3 Nb8-c6 Ng1-f3 e7-e5 Nb1-c3 d7-d5 Bf1-b5 e5-e4 Nf3-d4 Bc8-d7
[0] 0.e2-e3 (23,43)
[2] 3.d2-d3 (23,43)
[2] 4.e2-e4 (23,24)
[1] 1.d2-d4 (23,43)
[3] 2.Nb1-c3 (23,43)
[2] 5.c2-c4 (23,24)
11 23 30 9833  e2-e3
[3] 6.Nb1-a3 (23,24)
[3] 7.Ng1-f3 (23,24)
[2] 8.Ng1-h3 (23,24)
[2] 9.b2-b4 (23,24)
[2] 10.f2-f4 (23,24)
[1] 11.b2-b3 (23,24)
[2] 12.c2-c3 (23,24)
[1] 13.h2-h4 (23,24)
[2] 14.g2-g4 (23,24)
[3] 15.g2-g3 (23,24)
[2] 16.a2-a3 (23,24)
[1] 17.h2-h3 (23,24)
[1] 18.a2-a4 (23,24)
[2] 19.f2-f3 (23,24)
[0] 0.e2-e3 (-17,43)
[2] 1.d2-d4 (-17,43)
[1] 2.Nb1-c3 (-17,43)
[3] 3.e2-e4 (-17,43)
11 15 41 34979  d2-d4 d7-d5 Qd1-d3 Nb8-c6 Ng1-f3 Ng8-f6 Bc1-f4 Nf6-e4 Nb1-c3 Ne4xc3 Qd3xc3 Bc8-f5
[2] 4.d2-d3 (15,16)
[2] 5.Ng1-f3 (15,16)
11 16 42 44516  e2-e3 Nb8-c6 Nb1-c3 Ng8-f6 Ng1-f3 e7-e5 Bf1-b5 e5-e4 Nf3-d4 Bf8-c5 Nd4-f5
[0] 6.c2-c4 (16,17)
[0] 7.Nb1-a3 (16,17)
[0] 8.Ng1-h3 (16,17)
[0] 9.f2-f4 (16,17)
[0] 10.g2-g3 (16,17)
[0] 11.c2-c3 (16,17)
[0] 12.h2-h4 (16,17)
[0] 13.a2-a4 (16,17)
[0] 14.b2-b3 (16,17)
[0] 15.b2-b4 (16,17)
[0] 16.a2-a3 (16,17)
[0] 17.h2-h3 (16,17)
[0] 18.f2-f3 (16,17)
[0] 19.g2-g4 (16,17)
11 32 48 101472  Nb1-c3 d7-d5 d2-d4 Ng8-f6 Bc1-f4 Bc8-f5 Ng1-f3 Nb8-c6 e2-e3 Nf6-h5 Bf4-g5
11 20 49 112068  e2-e4 Nb8-c6 Ng1-f3 Ng8-f6 Bf1-d3 d7-d5 e4xd5 Nc6-b4 Bd3-b5 Bc8-d7 Bb5xd7 Qd8xd7 Ke1-g1 Nb4xd5
11 32 52 144061  Ng1-f3 Nb8-c6 d2-d4 Ng8-f6 Nb1-c3 d7-d5 Bc1-f4 Bc8-f5 e2-e3 Nf6-h5 Bf4-g5
[0] 0.Nb1-c3 (22,42)
[3] 2.e2-e3 (22,42)
[2] 1.e2-e4 (22,42)
[1] 3.Ng1-f3 (22,42)
[1] 4.d2-d4 (22,23)
[1] 5.d2-d3 (22,23)
[1] 6.c2-c4 (22,23)
[1] 7.Nb1-a3 (22,23)
12 22 53 18766  Nb1-c3
[1] 8.Ng1-h3 (22,23)
[1] 9.g2-g3 (22,23)
[1] 10.h2-h4 (22,23)
[1] 11.f2-f4 (22,23)
[1] 12.a2-a4 (22,23)
[1] 13.b2-b3 (22,23)
[1] 14.a2-a3 (22,23)
[1] 15.c2-c3 (22,23)
[1] 16.f2-f3 (22,23)
[1] 17.h2-h3 (22,23)
[1] 18.b2-b4 (22,23)
[1] 19.g2-g4 (22,23)
[0] 0.Nb1-c3 (-18,42)
[1] 2.e2-e3 (-18,42)
[2] 1.e2-e4 (-18,42)
[3] 3.Ng1-f3 (-18,42)
12 20 62 63073  Ng1-f3 Nb8-c6 d2-d4 d7-d5 e2-e3 Bc8-g4 Nb1-c3 Qd8-d6 Bf1-b5 Ke8-c8 Bb5xc6 Qd6xc6 Ke1-g1
[3] 4.d2-d4 (20,21)
[3] 5.d2-d3 (20,21)
[3] 6.Nb1-a3 (20,21)
[3] 7.c2-c4 (20,21)
[3] 8.Ng1-h3 (20,21)
[3] 9.f2-f4 (20,21)
[3] 10.h2-h4 (20,21)
[3] 11.g2-g3 (20,21)
[3] 12.a2-a4 (20,21)
12 13 63 69309  e2-e3 Nb8-c6 d2-d4 Ng8-f6 Ng1-f3 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Nb1-c3 Nf6-g4
[1] 13.b2-b3 (20,21)
[1] 14.a2-a3 (20,21)
[1] 15.c2-c3 (20,21)
[1] 16.f2-f3 (20,21)
[1] 17.h2-h3 (20,21)
[1] 18.g2-g4 (20,21)
[1] 19.b2-b4 (20,21)
12 11 64 85261  Nb1-c3 d7-d5 d2-d4 Nb8-c6 Ng1-f3 Bc8-f5 Nf3-h4 Bf5-c8 h2-h3 e7-e5 Nh4-f3 e5-e4
12 10 72 188237  e2-e4 Nb8-c6 Ng1-f3 Ng8-f6 Nb1-c3 d7-d5 e4xd5 Nf6xd5 Bf1-c4 Bc8-e6 Bc4xd5 Be6xd5 Ke1-g1 e7-e5 Nc3xd5 Qd8xd5
[0] 0.Ng1-f3 (10,30)
[1] 3.e2-e3 (10,30)
[3] 2.Nb1-c3 (10,30)
[2] 1.e2-e4 (10,30)
13 21 88 197885  Nb1-c3 d7-d5 d2-d4 Nb8-c6 e2-e3 Ng8-f6 Ng1-f3 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Bc1-d2
[3] 4.d2-d4 (21,22)
[3] 5.d2-d3 (21,22)
[3] 6.Nb1-a3 (21,22)
[3] 7.c2-c4 (21,22)
[3] 8.Ng1-h3 (21,22)
[3] 9.a2-a4 (21,22)
[3] 10.f2-f4 (21,22)
[3] 11.h2-h4 (21,22)
[3] 12.g2-g3 (21,22)
[3] 13.b2-b3 (21,22)
[3] 14.a2-a3 (21,22)
13 26 97 295482  Ng1-f3 Ng8-f6 e2-e3 Nb8-c6 d2-d4 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Nb1-c3 Nf6-d5 Bc1-d2
[0] 15.c2-c3 (26,27)
[3] 16.f2-f3 (26,27)
[0] 17.g2-g4 (26,27)
[3] 18.b2-b4 (26,27)
[0] 19.h2-h3 (26,27)
13 26 98 328853  e2-e3 Nb8-c6 d2-d4 Ng8-f6 Ng1-f3 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Nb1-c3 Nf6-d5 Bc1-d2
13 30 100 325129  e2-e4
[0] 0.e2-e4 (10,70)
[2] 1.e2-e3 (10,70)
[3] 2.Ng1-f3 (10,70)
[1] 3.Nb1-c3 (10,70)
13 26 100 2242  e2-e3 Nb8-c6 d2-d4 Ng8-f6 Ng1-f3 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Nb1-c3 Nf6-d5 Bc1-d2
[2] 4.d2-d4 (26,27)
[2] 5.d2-d3 (26,27)
[2] 6.c2-c4 (26,27)
[2] 7.Nb1-a3 (26,27)
[2] 8.Ng1-h3 (26,27)
[2] 9.a2-a4 (26,27)
[2] 10.b2-b3 (26,27)
[2] 11.h2-h4 (26,27)
[2] 12.g2-g3 (26,27)
[2] 13.a2-a3 (26,27)
[2] 14.f2-f4 (26,27)
[2] 15.h2-h3 (26,27)
[2] 16.c2-c3 (26,27)
[2] 17.g2-g4 (26,27)
[2] 18.f2-f3 (26,27)
[2] 19.b2-b4 (26,27)
13 26 100 3113  Ng1-f3 Ng8-f6 e2-e3 Nb8-c6 d2-d4 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Nb1-c3 Nf6-d5 Bc1-d2
13 21 100 7117  Nb1-c3 d7-d5 d2-d4 Nb8-c6 e2-e3 Ng8-f6 Ng1-f3 e7-e6 Bf1-d3 Bf8-d6 Ke1-g1 Ke8-g8 Bc1-d2
13 35 158 600331  e2-e4 Ng8-f6 e4-e5 Nf6-d5 Ng1-f3 d7-d6 Bf1-c4 Nd5-b6 Bc4-b5 Bc8-d7 Bb5xd7 Nb8xd7 e5xd6 Nd7-f6 d6-d7 Qd8xd7 Ke1-g1
[0] 0.e2-e4 (25,45)
[2] 3.Nb1-c3 (25,45)
[3] 2.Ng1-f3 (25,45)
[1] 1.e2-e3 (25,45)
[2] 4.d2-d4 (25,26)
[2] 5.d2-d3 (25,26)
[2] 6.c2-c4 (25,26)
[3] 7.Nb1-a3 (25,26)
[2] 8.Ng1-h3 (25,26)
[3] 9.a2-a4 (25,26)
[2] 10.b2-b3 (25,26)
[3] 11.h2-h4 (25,26)
[2] 12.g2-g3 (25,26)
[3] 13.a2-a3 (25,26)
[2] 14.f2-f4 (25,26)
[3] 15.h2-h3 (25,26)
[2] 16.c2-c3 (25,26)
[2] 17.g2-g4 (25,26)
[3] 18.f2-f3 (25,26)
[2] 19.b2-b4 (25,26)
14 43 263 836786  e2-e4 Nb8-c6 Ng1-f3 e7-e6 d2-d4 d7-d5 e4xd5 Qd8xd5 Nb1-c3 Bf8-b4 Bf1-d3 Ng8-f6 Ke1-g1 Bb4xc3 b2xc3 Ke8-g8
[0] 0.e2-e4 (33,53)
[3] 1.e2-e3 (33,53)
[1] 3.Nb1-c3 (33,53)
[2] 2.Ng1-f3 (33,53)
[1] 4.d2-d4 (33,34)
[2] 5.c2-c4 (33,34)
[3] 6.d2-d3 (33,34)
[3] 7.Nb1-a3 (33,34)
[3] 8.Ng1-h3 (33,34)
[3] 9.a2-a4 (33,34)
[3] 10.b2-b3 (33,34)
[3] 11.h2-h4 (33,34)
[3] 12.g2-g3 (33,34)
[3] 13.a2-a3 (33,34)
[3] 14.h2-h3 (33,34)
[3] 15.g2-g4 (33,34)
[3] 16.f2-f4 (33,34)
[3] 17.b2-b4 (33,34)
[3] 18.c2-c3 (33,34)
[3] 19.f2-f3 (33,34)
15 45 360 1144434  e2-e4 Nb8-c6 Ng1-f3 Ng8-f6 d2-d4 Nf6xe4 d4-d5 Nc6-b4 Nb1-c3 f7-f5 a2-a3 Ne4xc3 b2xc3 Nb4-a6 Bf1xa6 b7xa6 Ke1-g1
[0] 0.e2-e4 (35,55)
[2] 1.e2-e3 (35,55)
[1] 2.Ng1-f3 (35,55)
[3] 3.Nb1-c3 (35,55)
[1] 4.d2-d4 (35,36)
[1] 5.c2-c4 (35,36)
[2] 6.d2-d3 (35,36)
[2] 7.Nb1-a3 (35,36)
[2] 8.Ng1-h3 (35,36)
[2] 9.a2-a4 (35,36)
[2] 10.b2-b3 (35,36)
[2] 11.h2-h4 (35,36)
[2] 12.g2-g3 (35,36)
[2] 13.g2-g4 (35,36)
[1] 14.a2-a3 (35,36)
[2] 15.f2-f4 (35,36)
[1] 16.h2-h3 (35,36)
[2] 17.b2-b4 (35,36)
[2] 18.c2-c3 (35,36)
[1] 19.f2-f3 (35,36)
16 35 546 2138911  e2-e4 Nb8-c6 Ng1-f3 Ng8-f6 d2-d4 Nf6xe4 d4-d5 Nc6-b4 Nb1-c3 Ne4xc3 b2xc3 Nb4-a6 Bf1xa6 b7xa6 Ke1-g1 e7-e6 Bc1-f4 Bc8-b7 d5xe6 f7xe6
[0] 0.e2-e4 (-5,55)
[1] 1.e2-e3 (-5,55)
[2] 2.Nb1-c3 (-5,55)
[3] 3.Ng1-f3 (-5,55)
16 0 742 2065676  e2-e3 Ng8-f6 Nb1-c3 e7-e6 Ng1-f3 d7-d5 d2-d4 Bf8-d6 Bf1-d3 Nb8-c6 Ke1-g1 Ke8-g8 Bc1-d2 e6-e5 Nc3-b5 Bc8-g4 Nb5xd6 Qd8xd6
[1] 4.c2-c4 (0,1)
[1] 5.d2-d4 (0,1)
16 36 773 2639989  e2-e4 e7-e5 Ng1-f3 Ng8-f6 Bf1-c4 Nb8-c6 Ke1-g1 Bf8-d6 Nb1-c3 Ke8-g8 Qd1-e2 Bd6-c5 d2-d3 d7-d6 Bc1-e3 Bc8-g4 Be3xc5 d6xc5
[0] 6.d2-d3 (36,37)
[0] 7.Nb1-a3 (36,37)
[0] 8.f2-f4 (36,37)
[0] 9.h2-h3 (36,37)
[0] 10.h2-h4 (36,37)
[0] 11.Ng1-h3 (36,37)
[0] 12.b2-b3 (36,37)
[0] 13.a2-a4 (36,37)
[0] 14.a2-a3 (36,37)
[0] 15.g2-g3 (36,37)
[0] 16.g2-g4 (36,37)
[0] 17.b2-b4 (36,37)
[0] 18.c2-c3 (36,37)
[0] 19.f2-f3 (36,37)
16 17 799 3021568  Ng1-f3 Ng8-f6 d2-d4 d7-d5 Nb1-c3 e7-e6 Bc1-g5 Bf8-b4 e2-e3 Ke8-g8 Bf1-d3 h7-h6 Bg5xf6 Qd8xf6 Ke1-g1 Bb4xc3 b2xc3 Nb8-c6
16 5 858 3598962  Nb1-c3 Ng8-f6 e2-e4 e7-e5 Ng1-f3 Nb8-c6 Bf1-c4 Bf8-c5 Qd1-e2 Ke8-g8 Ke1-g1 d7-d6 d2-d3 Qd8-e7 Nc3-d5 Nf6xd5 Bc4xd5
16 21 1103 6377622  d2-d4 Ng8-f6 Nb1-c3 d7-d5 Ng1-f3 e7-e6 Bc1-g5 Bf8-b4 Qd1-d3 h7-h6 Bg5-f4 Ke8-g8 Ke1-c1 Nf6-g4 Kc1-b1 Ng4xf2
[0] 0.e2-e4 (26,46)
[2] 3.Ng1-f3 (26,46)
[3] 2.d2-d4 (26,46)
[1] 1.Nb1-c3 (26,46)
[2] 4.e2-e3 (26,27)
[2] 5.c2-c4 (26,27)
[2] 6.d2-d3 (26,27)
[2] 7.Nb1-a3 (26,27)
[2] 8.f2-f4 (26,27)
[1] 9.h2-h3 (26,27)
[2] 10.h2-h4 (26,27)
[1] 11.Ng1-h3 (26,27)
[1] 12.b2-b3 (26,27)
[2] 13.a2-a4 (26,27)
[3] 14.a2-a3 (26,27)
[1] 15.g2-g3 (26,27)
[2] 16.g2-g4 (26,27)
[2] 17.b2-b4 (26,27)
[3] 18.c2-c3 (26,27)
[2] 19.f2-f3 (26,27)
17 40 1419 3568245  e2-e4 e7-e5 Ng1-f3 Ng8-f6 Bf1-c4 Nb8-c6 Ke1-g1 Bf8-d6 Nb1-c3 Ke8-g8 Qd1-e2 Bd6-c5 d2-d3 d7-d6 Bc1-e3 Bc5xe3 Qe2xe3 Bc8-e6 Bc4xe6 f7xe6
[0] 0.e2-e4 (30,50)
[2] 1.Nb1-c3 (30,50)
[1] 2.d2-d4 (30,50)
[3] 3.Ng1-f3 (30,50)
[3] 4.e2-e3 (30,31)
[3] 5.c2-c4 (30,31)
[3] 6.d2-d3 (30,31)
[2] 7.f2-f4 (30,31)
[3] 8.h2-h4 (30,31)
[3] 9.h2-h3 (30,31)
[1] 10.a2-a3 (30,31)
[2] 11.Ng1-h3 (30,31)
[1] 12.g2-g3 (30,31)
[2] 13.Nb1-a3 (30,31)
[3] 14.a2-a4 (30,31)
[2] 15.c2-c3 (30,31)
[1] 16.b2-b3 (30,31)
[3] 17.g2-g4 (30,31)
[2] 18.b2-b4 (30,31)
[2] 19.f2-f3 (30,31)
exit
nodes = 42123183 <59 qnodes> time = 19514ms nps = 2158613
splits = 23 badsplits = 4 egbb_probes = 0
quit

dchoman
Posts: 171
Joined: Wed Dec 28, 2011 7:44 pm
Location: United States

Re: Depth = 20 Results

Post by dchoman » Sat Mar 23, 2013 8:23 am

Actually, the implementation I am using is much simpler than the one you are describing. The *only* communication between threads is through the various hash tables and a flag that tells the threads when to stop.

So when I start a new iteration, I give every thread the same alpha-beta limits (+/- 15 from the previous iteration's score) and the same root move list. The threads then search independently with a normal PVS algorithm until one of them either fails high or has searched all the root moves. When a thread finishes, it sets the flag that tells all threads to stop searching. Only the results of the thread that finished first are used. If that thread happened to be searching at iteration depth + 1, the iteration depth is increased by 1.
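
The launch-and-stop logic Dan describes is easy to prototype. Below is a minimal C++ sketch, not his actual code: `RootResult`, `run_iteration`, the `g_stop` flag, and the `search_root` callback (a stand-in for a real PVS root search that polls the stop flag) are all hypothetical names for illustration.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical result of one thread's independent root search.
struct RootResult {
    int score = 0;
    int depth = 0;
    bool finished = false;  // true only for the first thread to complete
};

// The only inter-thread signal besides the shared hash tables.
static std::atomic<bool> g_stop{false};

// Every thread gets the SAME window and (implicitly) the same root move
// list; threads diverge only through what they find in the hash tables.
template <typename SearchFn>
RootResult search_one_thread(SearchFn search_root, int alpha, int beta, int depth) {
    RootResult r;
    r.depth = depth;
    r.score = search_root(alpha, beta, depth, g_stop);
    r.finished = !g_stop.exchange(true);  // first finisher wins, stops the rest
    return r;
}

// Launch n identical searches; use only the first thread to finish.
template <typename SearchFn>
RootResult run_iteration(SearchFn search_root, int n, int alpha, int beta, int depth) {
    g_stop = false;
    std::vector<RootResult> results(n);
    std::vector<std::thread> pool;
    for (int i = 0; i < n; ++i)
        pool.emplace_back([&, i] {
            results[i] = search_one_thread(search_root, alpha, beta, depth);
        });
    for (auto& t : pool) t.join();
    for (auto& r : results)
        if (r.finished) return r;  // the winner's result
    return results[0];
}
```

The shared hash tables need no extra code here: as long as all threads probe and store in the same tables, the work-sharing happens implicitly.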

You might wonder what happens in the case of a fail high or fail low. The driver that handles re-searches at the root is what calls the SMP function that launches the threads, so a fail high/low simply generates a new search on all threads with new alpha/beta limits... again, identical across all threads.
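
That re-search driver can be sketched in a few lines. Here `search_all_threads` is a hypothetical stand-in for whatever function launches the identical searches on every thread and returns the winner's score; the +/- 15 window matches the description above, and the widening-to-infinity policy is one simple choice, not necessarily Dan's.

```cpp
#include <algorithm>

const int ASPIRATION = 15;        // +/- window around the previous score
const int INFINITE_SCORE = 32000; // stand-in for the engine's mate bound

// On a fail high or fail low, relaunch ALL threads with a widened window;
// every relaunch again hands identical limits to every thread.
template <typename Smp>
int aspiration_search(Smp search_all_threads, int prev_score, int depth) {
    int alpha = prev_score - ASPIRATION;
    int beta  = prev_score + ASPIRATION;
    for (;;) {
        int score = search_all_threads(alpha, beta, depth);
        if (score <= alpha)     alpha = -INFINITE_SCORE;  // fail low: re-search
        else if (score >= beta) beta  =  INFINITE_SCORE;  // fail high: re-search
        else                    return score;             // inside the window
    }
}
```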

In this very simple approach, there is no natural way to do YBW at the root -- unless I were to introduce some additional flags for it... that might work. In the few posts where I've mentioned having different threads search different moves, I was only talking about the order in which the moves are searched... and my modification has been very simple: even/odd threads search the even/odd moves first, but all threads still search all moves in the end. One could implement a stack, as you suggested in one of those earlier threads, and have the threads divide up the work somewhat more intelligently, but I have not tried that yet.
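
The even/odd ordering trick amounts to a small permutation of the root move list. A sketch, with a hypothetical `thread_move_order` helper working over move indices:

```cpp
#include <cstddef>
#include <vector>

// Every thread still searches ALL moves, but even threads take the
// even-indexed moves first and odd threads the odd-indexed ones, so the
// shared hash table gets populated with different subtrees first.
std::vector<int> thread_move_order(const std::vector<int>& root_moves, int thread_id) {
    std::vector<int> order;
    order.reserve(root_moves.size());
    int first = thread_id % 2;
    for (std::size_t i = first; i < root_moves.size(); i += 2)
        order.push_back(root_moves[i]);        // this thread's "own" half first
    for (std::size_t i = 1 - first; i < root_moves.size(); i += 2)
        order.push_back(root_moves[i]);        // then the rest
    return order;
}
```

Each thread searches the identical set of moves, just interleaved differently, which is exactly why the speedup comes only from hash-table sharing.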

Again, I want to be clear: I know a full YBW algorithm that splits throughout the tree is certainly better than this shared-hash SMP approach, but if one wants to try SMP (as I did), this is much easier to get started with, for at least some reasonable gain on a few processors.

- Dan

P.S. For timed searches, time expiring is the third possible reason to terminate all threads. In that case, I do a quick look through each thread's results to see if any of them changed from the first move to another preferred move... if so, that move is used instead. Such cases are very rare, however.
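
That timeout tie-break could be written as a simple scan, again with hypothetical names (`PartialResult`, `pick_on_timeout`) rather than Dan's actual code:

```cpp
#include <vector>

// Partial state of one thread when the clock ran out.
struct PartialResult {
    int best_move;  // the move this thread currently prefers
    int score;
};

// No thread finished in time: prefer any thread that has already switched
// away from the original first move (it must have found something better),
// otherwise keep the original move.
int pick_on_timeout(const std::vector<PartialResult>& results, int original_first_move) {
    for (const PartialResult& r : results)
        if (r.best_move != original_first_move)
            return r.best_move;
    return original_first_move;  // the common case: nothing changed
}
```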
