I'm glad that some of you like the ClassicAra engine.
It seems that there is some confusion about the engine that I would like to clarify.
The option Threads currently specifies the number of search threads, which allocate the mini-batches.
For multi-GPU builds, this option is treated as Threads per GPU, but because the option Threads has become a standard, I renamed it back to Threads. The current TCEC version is using the GPU build (TensorRT back-end) with 3 threads per GPU.
The script update.sh was used to build ClassicAra on the TCEC multi-GPU Linux server. The TCEC version does indeed use the new RISE 3.3 architecture. The RISE 3.3 model was trained on the same dataset (Kingbase Lite 2019) and has not yet been further optimized using reinforcement learning.
There are also some threads running by default:
- A main thread which handles user input commands over stdin.
- A thread manager which logs the current best move to the console every 1 s, stops the search threads when the stop command is given, and handles the time management.
- A garbage collector thread which asynchronously frees the memory from the previous search during the current search.
For this, you need to define an environment variable OMP_NUM_THREADS and set it to 1. During neural network inference, the search thread will be idle and wait for the neural network inference result.
Hopefully, there will be a more user friendly way for defining this in the future.
When I tried using int8 precision for ClassicAra 0.9.0 on Windows and Mac for the CPU version, it crashed.
For the Linux version, however, I managed to build a newer MXNet CPU back-end and it was running.
The crash might also depend on the CPU or a system library.
So if the CPU-only binary for Windows using int8 precision does not crash on start-up and runs faster than float32 precision, then it seems to be working.
I had hoped to publish new binaries by now. However, the integration of fully asynchronous garbage collection made the engine no longer 100% stable. I added a hotfix to make it 99.9% stable before the TCEC submission, but I'm not satisfied with the current solution yet.
After I have found a better solution for this problem, I will provide new binaries.