Core behaviour
Posted: Wed Jun 28, 2017 12:16 pm
I became a bit concerned after noticing a (very) strange result during an eng-eng match and decided to dive into it. I wrote some statistics code and started a match between 2 equal ProDeo engines on my 8-core Intel Xeon, Windows 7 Pro: 16,000 games (8 x 2000) at 40m/15s. During the match (with cute) I can take snapshots at any time; here is one.
There are some strange things going on here. First of all, 3000 games have already been played and the depth stats have long since settled, yet version 220 has an average middlegame depth of 10.66 while version 240 has only 10.61. A difference of 0.05 may not look like much, but from experience I know it is a big deal.
Code:
c:\cc\240-1\param.txt - Time used : 3:21:54 [MIDG depth = 10.72)
c:\cc\240-2\param.txt - Time used : 3:21:56 [MIDG depth = 10.53)
c:\cc\240-3\param.txt - Time used : 3:21:48 [MIDG depth = 10.71)
c:\cc\240-4\param.txt - Time used : 3:22:01 [MIDG depth = 10.59)
c:\cc\240-5\param.txt - Time used : 3:21:57 [MIDG depth = 10.64)
c:\cc\240-6\param.txt - Time used : 3:21:53 [MIDG depth = 10.49)
c:\cc\240-7\param.txt - Time used : 3:21:37 [MIDG depth = 10.71)
c:\cc\240-8\param.txt - Time used : 3:22:04 [MIDG depth = 10.52)
c:\cc\220-1\param.txt - Time used : 3:22:06 [MIDG depth = 10.75)
c:\cc\220-2\param.txt - Time used : 3:21:32 [MIDG depth = 10.57)
c:\cc\220-3\param.txt - Time used : 3:21:51 [MIDG depth = 10.74)
c:\cc\220-4\param.txt - Time used : 3:21:33 [MIDG depth = 10.77)
c:\cc\220-5\param.txt - Time used : 3:22:02 [MIDG depth = 10.70)
c:\cc\220-6\param.txt - Time used : 3:21:39 [MIDG depth = 10.50)
c:\cc\220-7\param.txt - Time used : 3:22:14 [MIDG depth = 10.55)
c:\cc\220-8\param.txt - Time used : 3:21:19 [MIDG depth = 10.74)
240 26:55:10 (187.192M nodes) NPS = 1.932M
220 26:54:16 (191.253M nodes) NPS = 1.975M
Depth Stats MIDG END0 END1 END2
240 10.61 11.25 12.11 14.98
220 10.66 11.30 12.13 15.03
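A quick sanity check of the totals above, sketched in Python. It assumes the node counts use a dot as thousands separator (so "187.192M" means 187,192 million nodes, roughly 187 billion), which is what makes the NPS come out near the 1.9M mentioned below:

```python
# Sanity-check the snapshot totals. Assumption: "187.192M" uses a dot
# as thousands separator, i.e. 187,192 million nodes (~187 billion).

def to_seconds(hms: str) -> int:
    """Convert an 'H:MM:SS' time string to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

# totals from the snapshot: version -> (time used, nodes)
totals = {
    "240": ("26:55:10", 187_192e6),
    "220": ("26:54:16", 191_253e6),
}

for version, (time_used, nodes) in totals.items():
    nps = nodes / to_seconds(time_used)
    print(f"{version}: NPS = {nps / 1e6:.3f}M")
# → 240: NPS = 1.932M
# → 220: NPS = 1.975M

# Averaging the eight per-process MIDG depths of 240 from the listing
# reproduces the 10.61 in the stats table:
depths_240 = [10.72, 10.53, 10.71, 10.59, 10.64, 10.49, 10.71, 10.52]
print(f"240 MIDG average = {sum(depths_240) / len(depths_240):.2f}")
# → 240 MIDG average = 10.61
```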
So where does this difference come from? Measuring each process separately gives a hint. Consider the 8th entry of each match:
c:\cc\240-8\param.txt - Time used : 3:22:04 [MIDG depth = 10.52)
c:\cc\220-8\param.txt - Time used : 3:21:19 [MIDG depth = 10.74)
That's a difference of 0.22.
So how does Windows divide the 8 matches over the 8 cores when starting cute? Do some cores bite each other while others remain hardly used? The following screenshot hints at that; note the load percentages of the 8 cores.
At this level the typical NPS is 1.9M, but I also had a case where the NPS of 240 was 2.1M, 200,000 nodes per second more, without any reasonable explanation.
So, what's going on?
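One way to take the Windows scheduler out of the equation would be to pin each of the eight instances to its own core. Windows has a built-in `start /affinity <hexmask>` switch for exactly this; the sketch below only builds the command strings (the paths follow the listing above, but `engine.exe` is a placeholder name, not the actual executable):

```python
# Sketch: pin each match process to a single core via Windows
# "start /affinity". The /affinity argument is a hex bitmask of
# allowed cores, so 1 << core selects exactly one core.
# "engine.exe" is a placeholder file name for illustration.

def affinity_cmd(core: int, exe: str) -> str:
    """Build a Windows 'start' command pinning exe to the given core."""
    return f"start /b /affinity {1 << core:x} {exe}"

for core in range(8):
    print(affinity_cmd(core, rf"c:\cc\240-{core + 1}\engine.exe"))
# first line printed: start /b /affinity 1 c:\cc\240-1\engine.exe
```

If the depth and NPS differences shrink once every process owns a fixed core, that would point at the scheduler (or core contention) rather than at the engines themselves.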
[A] quit computer chess dummy.