ClassicAra Chess Engine..World Record Download!!

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

IQ_QI
Posts: 25
Joined: Wed Dec 05, 2018 8:51 pm
Full name: Johannes Czech

Re: ClassicAra Chess Engine..World Record Download!!

Post by IQ_QI »

Hello everyone,
I'm glad that some of you like the ClassicAra engine.
It seems there is some confusion about the engine, which I would like to clear up.
Gabor Szots wrote: Thu May 20, 2021 9:32 pm
Sylwy wrote: Thu May 20, 2021 9:17 pm The UCI settings on the Arena GUI cannot affect the internal architecture of this engine. If necessary it uses 1, 2 or 3 threads.
Too bad. Then it's not suitable for 1-CPU testing.
The option Threads currently describes the number of search threads which allocate the mini-batches.
For multi-GPU builds, this option is treated as threads per GPU, but because Threads has become a standard option name, I renamed it back to Threads. The current TCEC version uses the GPU build (TensorRT back-end) with 3 threads per GPU.
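If you start the engine from a shell, you can also set the option directly over UCI. A minimal sketch (./ClassicAra is only a placeholder for the name of your actual executable):
[code]
# minimal sketch: set Threads to 1 over UCI and exit again
# ./ClassicAra is a placeholder for the name of your engine binary
printf 'uci\nsetoption name Threads value 1\nisready\nquit\n' | ./ClassicAra
[/code]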
The script update.sh is the one that was used to build ClassicAra on the TCEC multi-GPU Linux server. The TCEC version does indeed use the new RISE 3.3 architecture. The RISE 3.3 model was trained on the same dataset (Kingbase Lite 2019) and has not been further optimized using reinforcement learning yet.

There are also some threads running by default.
  • A main thread which handles user input commands over stdin.
  • A thread manager which logs the current best move to the console every 1s, stops the search threads when the stop command is given and handles the time management.
  • A garbage collector thread which asynchronously frees the memory from the previous search during the current search.
It is possible to use only a single thread for CPU based neural network inference.
For this you need to define the environment variable OMP_NUM_THREADS and set it to 1. During neural network inference, the search thread will be idle and wait for the neural network inference result.
Hopefully, there will be a more user-friendly way of configuring this in the future.
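As a minimal sketch for Linux or macOS (./ClassicAra is only a placeholder for your executable; on Windows you can use set OMP_NUM_THREADS=1 in the same console before launching the engine or the GUI):
[code]
# restrict CPU neural-network inference to a single OpenMP thread
export OMP_NUM_THREADS=1
# then start the engine (placeholder binary name)
./ClassicAra
[/code]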
My new test (versus the same Ktulu 9 as a marker) will be with hash=512 MB (for each engine) and INT8 weights enabled!
When I tried using int8 precision for ClassicAra 0.9.0 on Windows and Mac with the CPU version, it crashed.
For the Linux version, however, I managed to build a newer MXNet CPU back-end and it ran.
The crash may also depend on the CPU or a system library.
So if the CPU-only Windows binary with int8 precision does not crash on start-up and runs faster than the float32 version, it seems to be working.
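If you want to check this yourself, a rough shell sketch for comparing the two builds could look like this (the binary names are only placeholders); compare the nps values printed in the info lines:
[code]
# hypothetical binary names for the int8 and float32 CPU builds
# run a fixed 10 s search with each and compare the reported nps
( printf 'uci\nisready\nposition startpos\ngo movetime 10000\n'; sleep 11; printf 'quit\n' ) | ./ClassicAra_int8
( printf 'uci\nisready\nposition startpos\ngo movetime 10000\n'; sleep 11; printf 'quit\n' ) | ./ClassicAra_fp32
[/code]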

I had hoped to publish new binaries by now. However, the integration of a fully asynchronous garbage collection made the engine no longer 100% stable. I added a hotfix to make it 99.9% stable before the TCEC submission, but I'm not satisfied with the current solution yet.
Once I have found a better solution for this problem, I will provide new binaries.
AdminX
Posts: 6340
Joined: Mon Mar 13, 2006 2:34 pm
Location: Acworth, GA

Re: ClassicAra Chess Engine..World Record Download!!

Post by AdminX »

IQ_QI wrote: Thu May 20, 2021 11:11 pm Hello everyone,
I'm glad that some of you like the ClassicAra engine.
It seems there is some confusion about the engine, which I would like to clear up.
Gabor Szots wrote: Thu May 20, 2021 9:32 pm
Sylwy wrote: Thu May 20, 2021 9:17 pm The UCI settings on the Arena GUI cannot affect the internal architecture of this engine. If necessary it uses 1, 2 or 3 threads.
Too bad. Then it's not suitable for 1-CPU testing.
The option Threads currently describes the number of search threads which allocate the mini-batches.
For multi-GPU builds, this option is treated as threads per GPU, but because Threads has become a standard option name, I renamed it back to Threads. The current TCEC version uses the GPU build (TensorRT back-end) with 3 threads per GPU.
The script update.sh is the one that was used to build ClassicAra on the TCEC multi-GPU Linux server. The TCEC version does indeed use the new RISE 3.3 architecture. The RISE 3.3 model was trained on the same dataset (Kingbase Lite 2019) and has not been further optimized using reinforcement learning yet.

There are also some threads running by default.
  • A main thread which handles user input commands over stdin.
  • A thread manager which logs the current best move to the console every 1s, stops the search threads when the stop command is given and handles the time management.
  • A garbage collector thread which asynchronously frees the memory from the previous search during the current search.
It is possible to use only a single thread for CPU based neural network inference.
For this you need to define the environment variable OMP_NUM_THREADS and set it to 1. During neural network inference, the search thread will be idle and wait for the neural network inference result.
Hopefully, there will be a more user-friendly way of configuring this in the future.
My new test (versus the same Ktulu 9 as a marker) will be with hash=512 MB (for each engine) and INT8 weights enabled!
When I tried using int8 precision for ClassicAra 0.9.0 on Windows and Mac with the CPU version, it crashed.
For the Linux version, however, I managed to build a newer MXNet CPU back-end and it ran.
The crash may also depend on the CPU or a system library.
So if the CPU-only Windows binary with int8 precision does not crash on start-up and runs faster than the float32 version, it seems to be working.

I had hoped to publish new binaries by now. However, the integration of a fully asynchronous garbage collection made the engine no longer 100% stable. I added a hotfix to make it 99.9% stable before the TCEC submission, but I'm not satisfied with the current solution yet.
Once I have found a better solution for this problem, I will provide new binaries.
Thank you for such an interesting engine! :D
"Good decisions come from experience, and experience comes from bad decisions."
__________________________________________________________________
Ted Summers
User avatar
AdminX
Posts: 6340
Joined: Mon Mar 13, 2006 2:34 pm
Location: Acworth, GA

Re: ClassicAra Chess Engine..World Record Download!!

Post by AdminX »

[pgn]
[Event "banksia game"]
[Date "2021.05.20"]
[White "ClassicAra 0.9.0"]
[Black "Remote Hiarcs 14"]
[Result "1-0"]
[TimeControl "40/1800+1"]
[Time "18:18:32"]
[Termination "timeout"]
[ECO "E10"]
[Opening "Queen's pawn game"]

1. d4 {+0.56/31 48721 480841} Nf6 2. c4 {+0.58/28 48303 595078} e6
3. Nf3 {+0.68/24 48431 661344; E10: Queen's pawn game} c6 4. Bf4 {+0.93/25 21853 216120} d5
5. e3 {+1.17/28 48961 508706} Bd6 6. Bg3 {+1.11/48 145909 1582917} O-O {+0.42/19 91025 81657689}
7. Nc3 {+1.09/45 91544 1932958} b6 8. Rc1 {+1.09/43 1356 1626862} Ba6 {+0.30/20 60752 54940732}
9. Ne5 {+1.27/27 15947 168923} c5 {+0.34/19 85287 76072771} 10. cxd5 {+1.08/64 138785 1279906} Bxf1 {+0.17/18 66973 61939011}
11. dxe6 {+1.11/62 343 1195670} Bxg2 {+0.17/20 70195 68685409} 12. Rg1 {+1.15/60 524 1040032} Bc6 {+0.03/20 68065 64076988}
13. Nxf7 {+1.16/60 46120 926004} Rxf7 {+0.14/20 161 47516671} 14. exf7+ {+1.21/42 46034 1291265} Kxf7 {+0.12/21 136 47697148}
15. Be5 {+1.33/34 3350 255154} Bxe5 {+0.00/20 94818 93643265} 16. dxe5 {+1.49/61 47781 528691} Qxd1+ {-0.01/23 96 54665392}
17. Rxd1 {+1.40/60 94765 1313076} Ng8 {+0.00/23 68 113082748} 18. f4 {+1.49/51 46287 420647} Na6 {+0.00/23 65780 75775004}
19. Rd6 {+0.99/48 135053 1283169} Ne7 {+0.00/23 137 158896339} 20. e4 {+1.52/54 41045 498096} Nb4 {-0.15/21 140 48705563}
21. h4 {+1.93/39 16723 177249} c4 {-0.22/20 88041 98982746} 22. Kd2 {+1.07/49 83254 813430} Nd3 {-0.02/21 150 94949554}
23. Ke3 {+0.80/56 116707 1312588} b5 {-0.14/21 75129 87869312} 24. h5 {+0.86/54 1083 1205314} Rc8 {+0.00/21 147316 173382646}
25. h6 {+2.65/55 36823 376723} g6 {-0.18/20 52717 62335741} 26. Rf6+ {+2.16/61 71974 1030205} Kg8 {-0.20/21 65358 76987938}
27. e6 {+2.17/60 779 925190} b4 {+0.00/22 111068 142197279} 28. Ne2 {+1.94/62 71300 1170934} Nc5 {+0.23/22 152706 191540018}
29. Rf7 {+1.91/62 64553 1614751} Re8 {+0.44/22 63 82451474} 30. f5 {+1.89/57 57996 2033349} Nxe4 {+0.47/20 102 73316531}
31. Kf4 {+1.90/55 349 1993356} Nc5 {+0.58/19 65685 84181325} 32. Nd4 {+6.25/42 29914 298837} Ba8 {+0.69/19 58387 118214448}
33. Ke5 {+6.62/39 7584 89997} Rd8 {+2.51/21 192715 282064662} 34. Rg4 {+6.70/40 3690 106156} Nc8 {+1.26/19 65143 80169873}
35. fxg6 {+5.74/31 16042 167561} hxg6 {+1.50/18 6902 29030161} 36. Kf6 {+6.28/31 7431 154394} Kh8 {+1.26/19 27198 32966406}
37. Rc7 {+6.72/31 6351 163466} Ne4+ {+2.17/20 57621 73203064} 38. Kxg6 {+7.13/29 7644 208410} Re8 {+4.11/21 30182 49888540}
39. Rh7+ {+9.09/46 88269 941138} Kg8 {+5.01/24 155 123097117} 40. Rf7 {+9.19/44 289 926356} Ned6 {+2.54/23 33689 46393114}
41. h7+ {+9.73/42 350 813863} Kh8 {+2.54/21 106 497793} 42. Rgf4 {+9.96/40 391 607929} Be4+ {+3.15/27 58598 99367994}
43. Kh6 {+10.19/38 17671 748623} Bf5 {+3.07/27 34763 90299893} 44. R4xf5 {+10.32/36 294 717699} Nxf5+ {+4.50/27 185677 342430867}
45. Rxf5 {+10.36/34 285 643926} c3 {+4.17/27 101257 184973032} 46. bxc3 {+10.41/37 44362 776697} bxc3 {+5.94/28 174910 432309678}
47. Rf3 {+10.54/35 294 740371} c2 {+6.32/28 205988 434259179} 48. Rc3 {+10.49/33 621 315553} Nd6 {+6.30/27 41052 85455107}
49. Rxc2 {+10.43/30 58746 799615} Nb5 {+6.50/27 148963 449004116} 50. Nxb5 {+11.46/26 61445 728327} Rxe6+ {+6.50/28 156 142890193}
51. Kg5 {+11.55/24 10000 812690} a5 {+6.50/24 81773 195098472} 52. Nd4 {+12.07/22 21499 245960} Re4 {+6.10/24 22006 52312755}
53. Nf5 {+11.71/21 64102 772983} Re6 {M-28/25 252946 552787940} 54. Rh2 {+12.23/21 29468 330813} a4 {M-22/23 124 58515232}
55. Kf4 {+12.28/29 24307 391568} a3 {M-32/22 10761 21941860} 56. Nd4 {+12.46/28 66777 839643} Rf6+ {M-31/22 19501 39026884}
57. Ke5 {+12.88/31 66617 751530} Rf7 {M-29/17 720 911255} 58. Nb5 {+14.30/20 19203 251542} Re7+ {M-29/21 8436 17361747}
59. Kd6 {+15.34/24 68744 703788} Re4 {M-33/17 605 796066} 60. Nxa3 {+16.64/26 68474 646057} Rd4+ {M-52/20 5457 11190217}
61. Kc5 {+17.57/27 68183 671051} Re4 {M-34/18 626 724154} 62. Nc4 {+18.86/27 68212 630101} Re8 {M-19/26 122 141787677}
63. Nd6 {+19.14/20 67942 705578} Rf8 {M-30/18 800 994387} 64. a4 {+19.43/35 68065 732932} Rf5+ {M-20/19 2304 4299099}
65. Kb6 {+19.63/23 68017 684301} Rf6 {M-14/16 735 893121} 66. Kc7 {+19.61/27 67436 763529} Rf8 {M-11/17 802 1351150}
67. a5 {+18.38/20 66853 747163} Rf5 {M-10/35 76 158363093} 68. Nb7 {+19.45/25 46076 500613} Rf7+ {M-34/13 515 677891}
69. Kc6 {+18.90/29 68302 756292} Re7 {M-12/31 141 141570091} 70. Rh4 {+19.64/20 60945 658611} Re2 {M-13/16 499 601740}
71. Nd6 {+19.57/18 51021 538665} Re7 {M-11/33 175 108757619} 72. Nb7 {+19.48/18 38210 441416} Rf7 {M-21/15 468 569583}
73. Kb6 {+19.44/15 48032 517084} Rf6+ {M-23/18 512 609182} 74. Ka7 {+18.89/29 76467 854621} Rf5 {M-24/17 559 568357}
75. a6 {+18.20/18 75729 820411} Rf6 {M-22/17 1921 3345887} 76. Nc5 {+18.29/18 59360 687426} Re6 {M-22/15 1283 1274873}
77. Nd3 {+18.67/22 62450 671249} Re8 {M-15/18 3692 6951953} 78. Nb4 {+18.61/17 49282 537875} Rf8 {M-20/15 717 961893}
79. Nc6 {+18.96/17 97545 1035298} Kg7 {M-12/16 605 896734} 1-0
[/pgn]
"Good decisions come from experience, and experience comes from bad decisions."
__________________________________________________________________
Ted Summers
Gabor Szots
Posts: 1364
Joined: Sat Jul 21, 2018 7:43 am
Location: Szentendre, Hungary
Full name: Gabor Szots

Re: ClassicAra Chess Engine..World Record Download!!

Post by Gabor Szots »

IQ_QI wrote: Thu May 20, 2021 11:11 pm It is possible to use only a single thread for CPU based neural network inference.
For this you need to define the environment variable OMP_NUM_THREADS and set it to 1.
Thank you Johannes. That works!
Gabor Szots
CCRL testing group
Gabor Szots
Posts: 1364
Joined: Sat Jul 21, 2018 7:43 am
Location: Szentendre, Hungary
Full name: Gabor Szots

Re: ClassicAra Chess Engine..World Record Download!!

Post by Gabor Szots »

Sylwy wrote: Thu May 20, 2021 9:00 pm 1. To build the MCGS, the engine uses by default (internally) 1 to 3 threads:


2. The RISEv3.3 net is an RL one + a new architecture. Much better.


This engine is really worth studying. A very interesting architecture.
Thank you Sylwy.
Gabor Szots
CCRL testing group
AdminX
Posts: 6340
Joined: Mon Mar 13, 2006 2:34 pm
Location: Acworth, GA

Re: ClassicAra Chess Engine..World Record Download!!

Post by AdminX »

IQ_QI wrote: Thu May 20, 2021 11:11 pm Hello everyone,
I'm glad that some of you like the ClassicAra engine.
It seems there is some confusion about the engine, which I would like to clear up.
Gabor Szots wrote: Thu May 20, 2021 9:32 pm
Sylwy wrote: Thu May 20, 2021 9:17 pm The UCI settings on the Arena GUI cannot affect the internal architecture of this engine. If necessary it uses 1, 2 or 3 threads.
Too bad. Then it's not suitable for 1-CPU testing.
The option Threads currently describes the number of search threads which allocate the mini-batches.
For multi-GPU builds, this option is treated as threads per GPU, but because Threads has become a standard option name, I renamed it back to Threads. The current TCEC version uses the GPU build (TensorRT back-end) with 3 threads per GPU.
The script update.sh is the one that was used to build ClassicAra on the TCEC multi-GPU Linux server. The TCEC version does indeed use the new RISE 3.3 architecture. The RISE 3.3 model was trained on the same dataset (Kingbase Lite 2019) and has not been further optimized using reinforcement learning yet.

There are also some threads running by default.
  • A main thread which handles user input commands over stdin.
  • A thread manager which logs the current best move to the console every 1s, stops the search threads when the stop command is given and handles the time management.
  • A garbage collector thread which asynchronously frees the memory from the previous search during the current search.
It is possible to use only a single thread for CPU based neural network inference.
For this you need to define the environment variable OMP_NUM_THREADS and set it to 1. During neural network inference, the search thread will be idle and wait for the neural network inference result.
Hopefully, there will be a more user-friendly way of configuring this in the future.
My new test (versus the same Ktulu 9 as a marker) will be with hash=512 MB (for each engine) and INT8 weights enabled!
When I tried using int8 precision for ClassicAra 0.9.0 on Windows and Mac with the CPU version, it crashed.
For the Linux version, however, I managed to build a newer MXNet CPU back-end and it ran.
The crash may also depend on the CPU or a system library.
So if the CPU-only Windows binary with int8 precision does not crash on start-up and runs faster than the float32 version, it seems to be working.

I had hoped to publish new binaries by now. However, the integration of a fully asynchronous garbage collection made the engine no longer 100% stable. I added a hotfix to make it 99.9% stable before the TCEC submission, but I'm not satisfied with the current solution yet.
Once I have found a better solution for this problem, I will provide new binaries.
Hi,

I am trying to compile the latest binary for Windows. I am getting a 'Torch Error' (directory not found). I have downloaded the source for pytorch-1.8.1 and pointed cmake toward C:\Users\username\Desktop\pytorch-1.8.1\cmake. Can you help me figure out what I am missing?

Thanks


UPDATE: My eyes are killing me, I just noticed the file is not there. It's just a .ini file. I still need assistance if you have time.
"Good decisions come from experience, and experience comes from bad decisions."
__________________________________________________________________
Ted Summers
Chessqueen
Posts: 5589
Joined: Wed Sep 05, 2018 2:16 am
Location: Moving
Full name: Jorge Picado

Re: ClassicAra Chess Engine..World Record Download!!

Post by Chessqueen »

Gabor Szots wrote: Fri May 21, 2021 7:54 am
IQ_QI wrote: Thu May 20, 2021 11:11 pm It is possible to use only a single thread for CPU based neural network inference.
For this you need to define the environment variable OMP_NUM_THREADS and set it to 1.
Thank you Johannes. That works!

I still get a low estimate for ClassicAra CPU against Fruit after I made the change, but the ClassicAra used by TCEC is about 200 Elo stronger :shock:
Halogen, Koivisto and ClassicAra 0.9.2.post1 are leading at https://tcec-chess.com/

Engine Score Fr Cl S-B
1: Fruit2.2.1 12.0/20 ···················· 01=1=110=1=10101=10= 96.00
2: ClassicAra 8.0/20 10=0=001=0=01010=01= ···················· 96.00

20 games played / Tournament is finished
Name of the tournament: Fruit Tournament
Site/ Country: MININT-UB2PIMJ, United States
Level: Blitz 5/2
Hardware: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 15.9 GB Memory
Operating system: Windows 10 Enterprise Professional (Build 9200) 64 bit
PGN-File: C:\Program Files (x86)\Arena\Tournaments\FruitTournament.pgn
Who is the 17-year-old GM Gukesh, 2nd at the Candidates in Toronto?
https://indianexpress.com/article/sport ... t-9281394/
Gabor Szots
Posts: 1364
Joined: Sat Jul 21, 2018 7:43 am
Location: Szentendre, Hungary
Full name: Gabor Szots

Re: ClassicAra Chess Engine..World Record Download!!

Post by Gabor Szots »

Since I introduced that environment variable, results have deteriorated considerably, which is perhaps not a surprise. Anyway, this engine seems to be similar to Lc0 in that it plays much worse on CPU than on GPU.
Gabor Szots
CCRL testing group
IQ_QI
Posts: 25
Joined: Wed Dec 05, 2018 8:51 pm
Full name: Johannes Czech

Re: ClassicAra Chess Engine..World Record Download!!

Post by IQ_QI »

I am trying to compile the latest binary for Windows. I am getting a 'Torch Error' (directory not found). I have downloaded the source for pytorch-1.8.1 and pointed cmake toward C:\Users\username\Desktop\pytorch-1.8.1\cmake. Can you help me figure out what I am missing?
Hello,

As stated in the wiki pages about the build instructions, only a single back-end should be activated at a time.
Only one of these modes can be active at a time, with the exception that BACKEND_MXNET and BACKEND_TENSORRT can both be ON at the same time. By default, the native TensorRT back-end without MXNet is used.
Do you want to build it for CPU or GPU? The easiest way of building it is with the GPU TensorRT back-end.
Building with the Torch back-end is also possible, but it is still experimental, and the neural network models for chess are only available in the ONNX and MXNet formats so far.
The Torch back-end is also much slower than the TensorRT back-end.

According to this issue, there have been changes in the TensorRT API in version 8.0.
The latest version currently supported is TensorRT-7.2.3.4.
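As a rough sketch, a TensorRT-only configure step on Linux could look like the following. Apart from BACKEND_MXNET and BACKEND_TENSORRT, the flag names (e.g. the Torch switch) are only illustrations, so please check the wiki build instructions for the exact options:
[code]
# minimal sketch, assuming the sources are checked out and TensorRT <= 7.2.3.4 is installed
mkdir build && cd build
# enable only the TensorRT back-end; the BACKEND_TORCH flag name is an assumption
cmake -DCMAKE_BUILD_TYPE=Release \
      -DBACKEND_TENSORRT=ON \
      -DBACKEND_MXNET=OFF \
      -DBACKEND_TORCH=OFF \
      ..
make -j4
[/code]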
Chessqueen
Posts: 5589
Joined: Wed Sep 05, 2018 2:16 am
Location: Moving
Full name: Jorge Picado

Re: ClassicAra Chess Engine..World Record Download!!

Post by Chessqueen »

IQ_QI wrote: Fri May 21, 2021 6:46 pm
I am trying to compile the latest binary for Windows. I am getting a 'Torch Error' (directory not found). I have downloaded the source for pytorch-1.8.1 and pointed cmake toward C:\Users\username\Desktop\pytorch-1.8.1\cmake. Can you help me figure out what I am missing?
In the last game of ClassicAra 0.9.2.post1 it trapped its own Queen; there must be a bug ==>
[pgn]
[Event "TCEC Season 21 - League 4"]
[Site "https://tcec-chess.com"]
[Date "2021.05.24"]
[Round "22.1"]
[White "Drofa 3.0.0"]
[Black "ClassicAra 0.9.2.post1"]
[Result "1-0"]
[Annotator "archive"]

1. e4 c5 2. c3 e6 3. d4 d5 4. e5 Bd7 5. Be2 Nc6 6. Nf3 Nge7 7. O-O cxd4 8. cxd4 Nf5 9. Nc3 a6 10. g4 Nh4 11. Nxh4 Qxh4 12. Be3 h5 13. g5 Ne7 14. Qd3 Nf5 15. f4 Bb4 16. a4 Rc8 17. Kg2 Rc4 18. Qd2 Nxe3+ 19. Qxe3 Bxc3 20. bxc3 Rxa4 21. Rxa4 Bxa4 22. Rf3 Bc2 23. Rh3 Qxh3+ 24. Qxh3 Bf5 25. Qe3 O-O 26. Bxh5 g6 27. Be2 a5 28. Qc1 b5 29. Qa1 a4 30. Bxb5 Ra8 31. Qa3 Bc2 32. Qe7 a3 33. Be8 Be4+ 34. Kf1 Bc2 35. Qxf7+ Kh8 36. Qf8+ Kh7 37. Bxg6+ Kxg6 38. Qxa8 Bd3+ 39. Kf2 a2 40. Qxa2 Kf5 41. Qa7 Kxf4 42. g6 Bxg6 43. Qg7 Bh5 44. Qh6+ Kf5 45. Qxh5+ Kf4 46. h4 Ke4 47. Qf3#[/pgn]
Who is the 17-year-old GM Gukesh, 2nd at the Candidates in Toronto?
https://indianexpress.com/article/sport ... t-9281394/