I'd therefore like to propose such a test.
Running on a single core, take the starting position and let the engine search it for about one minute. Here are the results on an i5-2500K (Sandy Bridge).
Houdini 1.5a x64 1-core:
21/44 01:01 151,909,342 2,481,000 +0.17 e2-e4 e7-e5
Houdini 1.5a x64 (512MB hash) achieves a speed of 2,490,317 NPS average (151,909,342 nodes divided by 61 seconds).
Rybka 4.1 SSE42 x64:
18 01:22 12,214,461 148,739 +0.22 e2-e4 e7-e5
Rybka 4.1 SSE42 x64 reaches 148,956 NPS average.
Critter 1.2 64bit SSE4:
21/43 01:04 153,833,934 2,394,712 +0.12 d2-d4 Ng8-f6
Critter 1.2 64bit SSE4 reaches 2,403,655 NPS average.
Stockfish-211-64-ja:
25/33 01:08 106,008,326 1,558,693 +0.28 e2-e4 e7-e6
Stockfish-211-64-ja reaches 1,558,946 NPS average.
Komodo 3 x64 SSE4:
22 01:28 133,246,706 1,502,912 +0.28 d2-d4 d7-d5
Komodo 3 x64 SSE4 reaches 1,624,960 NPS average.
Now, normalizing Houdini's NPS to a score of 1000, the other engines receive relative NPS scores.
i5-2500K
Houdini 1.5a x64       1000
Critter 1.2 64bit SSE4 965.2
Komodo 3 x64 SSE4 652.5
Stockfish 2.1.1 x64 626.0
Rybka 4.1 SSE42 x64 59.81
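For anyone who wants to reproduce the table, here is a small sketch of the arithmetic: average NPS is total nodes divided by elapsed seconds, and each engine's score is its average NPS relative to Houdini's, scaled to 1000. The node counts and times are the ones quoted above; note that Komodo's quoted average of 1,624,960 NPS corresponds to about 82 seconds, not the 01:28 shown in its output line, so I use 82 here.

```python
# Relative NPS scores, normalized to Houdini = 1000.
# Each entry: (total nodes searched, elapsed seconds), taken from the runs above.
results = {
    "Houdini 1.5a x64":       (151_909_342, 61),
    "Critter 1.2 64bit SSE4": (153_833_934, 64),
    "Komodo 3 x64 SSE4":      (133_246_706, 82),  # 82 s matches the quoted 1,624,960 NPS average
    "Stockfish 2.1.1 x64":    (106_008_326, 68),
    "Rybka 4.1 SSE42 x64":    (12_214_461, 82),
}

# Average NPS = nodes / seconds
nps = {name: nodes / secs for name, (nodes, secs) in results.items()}
baseline = nps["Houdini 1.5a x64"]

# Print scores from fastest to slowest, Houdini pinned at 1000
for name, value in sorted(nps.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} {1000 * value / baseline:7.1f}")
```

This reproduces the i5-2500K column: Critter 965.2, Komodo 652.5, Stockfish 626.0, Rybka 59.8.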
Remember, Houdini's score is merely the comparison baseline and will always be 1000. If Critter suddenly scores 921 on a Phenom, that means Houdini gains a particular edge on that platform compared to Critter.