Code: Select all
c:\cc\240-1\param.txt - Time used : 3:21:54 [MIDG depth = 10.72)
c:\cc\240-2\param.txt - Time used : 3:21:56 [MIDG depth = 10.53)
c:\cc\240-3\param.txt - Time used : 3:21:48 [MIDG depth = 10.71)
c:\cc\240-4\param.txt - Time used : 3:22:01 [MIDG depth = 10.59)
c:\cc\240-5\param.txt - Time used : 3:21:57 [MIDG depth = 10.64)
c:\cc\240-6\param.txt - Time used : 3:21:53 [MIDG depth = 10.49)
c:\cc\240-7\param.txt - Time used : 3:21:37 [MIDG depth = 10.71)
c:\cc\240-8\param.txt - Time used : 3:22:04 [MIDG depth = 10.52)
c:\cc\220-1\param.txt - Time used : 3:22:06 [MIDG depth = 10.75)
c:\cc\220-2\param.txt - Time used : 3:21:32 [MIDG depth = 10.57)
c:\cc\220-3\param.txt - Time used : 3:21:51 [MIDG depth = 10.74)
c:\cc\220-4\param.txt - Time used : 3:21:33 [MIDG depth = 10.77)
c:\cc\220-5\param.txt - Time used : 3:22:02 [MIDG depth = 10.70)
c:\cc\220-6\param.txt - Time used : 3:21:39 [MIDG depth = 10.50)
c:\cc\220-7\param.txt - Time used : 3:22:14 [MIDG depth = 10.55)
c:\cc\220-8\param.txt - Time used : 3:21:19 [MIDG depth = 10.74)
240 26:55:10 (187.192M nodes) NPS = 1.932K
220 26:54:16 (191.253M nodes) NPS = 1.975K
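The NPS figures above can be reproduced from the summed match times and node counts: dividing the total nodes by the total (summed) time of the 8 concurrent matches gives the quoted ~1.9K value. A small sketch to verify the arithmetic:

```python
# Verify the reported NPS = total nodes / summed wall time of the 8 matches.
def hms_to_sec(t):
    """Convert an 'H:MM:SS' string to seconds."""
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

for label, total_time, nodes in [("240", "26:55:10", 187.192e6),
                                 ("220", "26:54:16", 191.253e6)]:
    nps = nodes / hms_to_sec(total_time)
    print(f"{label}: NPS = {nps / 1000:.3f}K")  # 240 -> 1.932K, 220 -> 1.975K
```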
Depth Stats   MIDG    END0    END1    END2
240           10.61   11.25   12.11   14.98
220           10.66   11.30   12.13   15.03
So where is this difference coming from? Measuring each process separately gives a hint. Let's consider the 8th entry of each match.
c:\cc\240-8\param.txt - Time used : 3:22:04 [MIDG depth = 10.52)
c:\cc\220-8\param.txt - Time used : 3:21:19 [MIDG depth = 10.74)
That's a difference of 0.22.
So how does Windows divide the 8 matches over the 8 cores when starting cute? Do some cores bite each other while others remain hardly used? The following screenshot hints at that; note the load percentages of the 8 cores.
At this level the typical NPS is 1.9M, but I also had a case where the NPS of 240 was 2.1M, 200,000 nodes per second more, without any reasonable explanation.
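One way to rule out such scheduling effects would be to pin each match to its own core using the Windows `start /affinity` switch, so no two matches compete for the same core. A hypothetical sketch that generates the launch commands (the per-match command lines are placeholders, not the actual invocation used here):

```python
# Sketch: build Windows "start /affinity" commands pinning each of the 8
# matches to its own logical core. The affinity mask has one bit per core:
# core 0 -> 0x1, core 1 -> 0x2, core 2 -> 0x4, and so on.
# "<match command N>" is a placeholder for the real cutechess command line.
for i in range(8):
    mask = 1 << i
    print(f"start /affinity {mask:#x} <match command {i + 1}>")
```

With each process locked to a distinct core, the per-match depth and NPS numbers should no longer depend on how the OS happens to schedule the 8 processes.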
So, what's going on?
[A] quit computer chess dummy.