For me and Thomas Zipproth (Brainfish / Cerebellum) it was a huge surprise that the results when using SALC openings are not distorted compared with a "normal" opening set like the HERT set. I posted the results on my website and in the SALC V5 readme file, but I think I should post them here again, because this is very important: it proves that SALC can be used for tournaments or rating lists without distorting the results (!). And using SALC instead of any "normal" opening set reduces the number of draws a lot and shortens the average game length by around 11% (!)

BrendanJNorman wrote:
This is an immutable law of chess strategy, not even necessarily related to computer chess.

Dann Corbit wrote:
Thank you. I learn important lessons from all of your books.
I also think that short versus long castling is a very important theme.
In addition, games can become really exciting when opposite castles occur.
It's one of the first dynamics I teach and train my students in, actually.
Anyway, I love these types of "theme-based" books, and will share my thoughts more on this one when I finish work.
Using the new HERT opening set (by Thomas Zipproth) for my Stockfish testing was a great opportunity to compare the game bases played with HERT (which contains positions selected from the most-played variations in engine and human tournaments) and with my SALC openings. So, here are the results. Both game bases were played at 3'+1'', single core, 512 MB hash. The only difference was the opening set (HERT / SALC): 2x 15000 games (!)
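For readers who want to run a comparable match themselves, a hypothetical invocation of the common cutechess-cli match runner under the same conditions (3'+1'', single core, 512 MB hash, openings from a PGN set) might look like the sketch below. The engine paths, names and the openings file name are placeholders, and this is not necessarily the tool used for the runs in this post.

```shell
# Sketch of a two-engine match under the conditions above (assumed tool: cutechess-cli).
# Engine paths and the openings file name are placeholders.
cutechess-cli \
  -engine cmd=./stockfish name=Stockfish \
  -engine cmd=./komodo name=Komodo \
  -each proto=uci tc=180+1 option.Hash=512 option.Threads=1 \
  -openings file=salc_v3.pgn format=pgn order=sequential \
  -games 2 -rounds 2500 -repeat \
  -pgnout results.pgn
```

`-games 2 -repeat` plays each opening twice with colors reversed, which is the usual way to cancel out the built-in bias of an opening position.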
HERT:
   Program                   Elo    +   -   Games   Score    Av.Op.   Draws
 1 Stockfish 170526 bmi2   : 3346   7   7   5000    71.3 %   3171     45.6 %
 2 Komodo 11.2.2 x64       : 3314   6   6   5000    66.9 %   3177     45.8 %
 3 Houdini 5 pext          : 3299   6   6   5000    64.7 %   3180     48.5 %
 4 Shredder 13 x64         : 3119   6   6   5000    37.8 %   3216     43.7 %
 5 Fizbo 1.9 bmi2          : 3096   6   6   5000    34.4 %   3221     38.2 %
 6 Andscacs 0.91b bmi2     : 3026   7   7   5000    24.9 %   3235     34.9 %
Elo differences between ranks:
1-6: 320 (overall)
1-2: 32
2-3: 15
3-4: 180
4-5: 23
5-6: 70
Games: 15000 (finished)
Average game length: +13.7% in moves and +10% in time, compared to the SALC games
White Wins: 5129 (34.2 %)
Black Wins: 3455 (23.0 %)
Draws: 6416 (42.8 %)
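As a quick sanity check (my own sketch, not part of the original test setup), the percentages above follow directly from the raw game counts:

```python
# Reproduce the HERT result percentages from the raw counts above.
white_wins, black_wins, draws = 5129, 3455, 6416
total = white_wins + black_wins + draws  # 15000 finished games

for label, count in [("White Wins", white_wins),
                     ("Black Wins", black_wins),
                     ("Draws", draws)]:
    print(f"{label}: {count} ({100 * count / total:.1f} %)")
```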
SALC V3:
   Program                   Elo    +   -   Games   Score    Av.Op.   Draws
 1 Stockfish 170526 bmi2   : 3359   7   7   5000    72.7 %   3168     39.9 %
 2 Komodo 11.2.2 x64       : 3327   7   7   5000    68.3 %   3175     38.5 %
 3 Houdini 5 pext          : 3298   6   6   5000    64.4 %   3180     42.2 %
 4 Shredder 13 x64         : 3108   6   6   5000    36.4 %   3218     35.4 %
 5 Fizbo 1.9 bmi2          : 3097   7   7   5000    34.8 %   3221     31.1 %
 6 Andscacs 0.91b bmi2     : 3012   7   7   5000    23.5 %   3238     27.7 %
Elo differences between ranks:
1-6: 347 (overall)
1-2: 32
2-3: 29
3-4: 190
4-5: 11
5-6: 85
Games: 15000 (finished)
White Wins: 5476 (36.5 %)
Black Wins: 4154 (27.7 %)
Draws: 5370 (35.8 %)
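Putting the two 15000-game runs side by side (a sketch using only the numbers above): the draw rate drops and the Elo spread between rank 1 and rank 6 widens when moving from HERT to SALC:

```python
# Draw rates from the two 15000-game runs above
hert_draws = 6416 / 15000
salc_draws = 5370 / 15000
print(f"Draw rate: {100 * hert_draws:.1f} % (HERT) vs {100 * salc_draws:.1f} % (SALC)")
print(f"Relative draw reduction: {100 * (hert_draws - salc_draws) / hert_draws:.1f} %")

# Elo spread between rank 1 and rank 6 in each run
hert_spread = 3346 - 3026
salc_spread = 3359 - 3012
print(f"Elo spread rank 1 to 6: {hert_spread} (HERT) vs {salc_spread} (SALC)")
```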
Conclusions:
1) SALC lowers the draw rate a lot (35.8%) compared to the HERT opening set (42.8%). Note that the HERT set was itself optimized for a low draw rate: Thomas Zipproth chose only lines which are not too drawish. Using other "classical" opening sets should lead to a higher draw rate than using HERT.
2) The rank order is the same for all engines in both game bases, i.e. no distorted results when playing SALC.
3) The scores of the engines do not get closer to 50% when using SALC, and the Elo differences do not get smaller. In fact, they get bigger (Elo difference rank 1 to 6: 320 Elo using HERT, but 347 Elo using SALC), which proves that SALC does not contain many lines leading to a clear advantage (and easy wins) for White or Black. And bigger Elo differences make the results statistically more reliable.
4) SALC lowers the average game duration by around 10%. That means that in the same time, about 10% more games can be played, which leads to statistically more valuable results in the same time.
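A back-of-the-envelope check of point 4 (my own sketch, not from the original post): since HERT games take about 10% longer in time than SALC games, roughly 10% more SALC games fit into the same wall-clock time, and the statistical error of a measured score shrinks like 1/sqrt(N):

```python
import math

# HERT games take about 10 % longer (in time) than SALC games,
# so roughly 10 % more SALC games fit into the same wall-clock time.
speedup = 1.10
print(f"{100 * (speedup - 1):.0f} % more games in the same time")

# The error bar of a measured score shrinks like 1/sqrt(N), so 10 % more
# games tighten the error bars by about 5 %.
error_ratio = 1 / math.sqrt(speedup)
print(f"Error bars shrink to {100 * error_ratio:.1f} % of their former size")
```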