CrazyAra, ClassicAra, MultiAra 0.9.5 release

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

carldaman
Posts: 2283
Joined: Sat Jun 02, 2012 2:13 am

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by carldaman »

Dokterchen, you're right, thanks! Those are big files and were not included with the engine package.
carldaman
Posts: 2283
Joined: Sat Jun 02, 2012 2:13 am

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by carldaman »

I unzipped the downloaded model package into the proper folder, but the engine still crashes, even though it acknowledges that the files are there.

Then I tried to install it on another computer running Win10; there it gave no error but still refused to run. I also sent isready from the command console, but the window closed after about a second, again giving no error.

I'm baffled by these experiences, but hoping future releases will be less buggy. The engine certainly has a very nice playing style!
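For what it's worth, a crash with no console output can be probed by driving the engine over the UCI protocol directly, instead of relying on the GUI. A minimal Python sketch (the engine command shown in the comment is a placeholder, not the actual install path):

```python
import subprocess

def uci_isready(engine_cmd):
    """Start a UCI engine, send 'uci' then 'isready', and report whether
    it answers 'readyok'. Returns True on success, False if the engine
    exits or closes its output without confirming readiness."""
    proc = subprocess.Popen(
        list(engine_cmd),
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        text=True, bufsize=1,
    )
    try:
        proc.stdin.write("uci\n")
        proc.stdin.write("isready\n")
        proc.stdin.flush()
        # Read engine output until it confirms readiness or the stream ends.
        for line in proc.stdout:
            if line.strip() == "readyok":
                return True
        return False
    finally:
        try:
            proc.stdin.write("quit\n")
            proc.stdin.flush()
        except (BrokenPipeError, OSError, ValueError):
            pass  # engine already died -- exactly the failure we're probing
        proc.terminate()

# Hypothetical usage -- adjust the path to your install:
# print(uci_isready([r"C:\Engines\ClassicAra\ClassicAra.exe"]))
```

Running this from an already-open console keeps any error text visible even if the engine dies immediately, which the disappearing window otherwise hides.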
Dokterchen
Posts: 133
Joined: Wed Aug 15, 2007 12:18 pm
Location: Munich

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by Dokterchen »

carldaman wrote: Tue Aug 31, 2021 6:38 am I unzipped the downloaded model package into the proper folder, but the engine still crashes, even though it acknowledges that the files are there.

Then I tried to install it on another computer running Win10; there it gave no error but still refused to run. I also sent isready from the command console, but the window closed after about a second, again giving no error.

I'm baffled by these experiences, but hoping future releases will be less buggy. The engine certainly has a very nice playing style!
I have downloaded: ClassicAra-sl-model-wdlp-rise3.3-input3.0.zip

And moved the model files into this folder:

C:\Data\Engines\CrazyAra_ClassicAra_MultiAra_0.9.5_Win_TensorRT\model\ClassicAra\chess

It looks like this:

[screenshot of the model folder contents]
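As a sanity check, one can list which network files are actually present in that folder before starting the engine. A small Python sketch (the helper name is made up; the file patterns assume the MXNet convention of a -symbol.json architecture file plus a .params weights file, with .trt files being the TensorRT build's cache):

```python
from pathlib import Path

def find_model_files(model_dir):
    """List the network files an Ara engine would look for in its
    model/<engine-name>/<variant> folder: an MXNet '-symbol.json'
    architecture file, a '.params' weights file, and (for the
    TensorRT build) cached '.trt' engine files."""
    model_dir = Path(model_dir)
    return {
        "symbol": sorted(model_dir.glob("*-symbol.json")),
        "params": sorted(model_dir.glob("*.params")),
        "trt_cache": sorted(model_dir.glob("*.trt")),
    }

# Hypothetical usage -- adjust the path to your install:
# files = find_model_files(r"...\model\ClassicAra\chess")
# if not (files["symbol"] and files["params"]):
#     print("Model files missing -- the engine will fail to initialize.")
```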
carldaman
Posts: 2283
Joined: Sat Jun 02, 2012 2:13 am

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by carldaman »

My folder lacks the .trt files. Anyway, I didn't want to use the GPU version, but the CPU MKL package, which worked for prior versions.
IQ_QI
Posts: 25
Joined: Wed Dec 05, 2018 8:51 pm
Full name: Johannes Czech

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by IQ_QI »

carldaman wrote: Wed Sep 01, 2021 1:13 am My folder lacks the .trt files. Anyway, I didn't want to use the GPU version, but the CPU MKL package, which worked for prior versions.
Hello carldaman,
yes, it looks like the MXNet library shipped with the Windows version is unable to load the new ClassicAra-sl-model-wdlp-rise3.3-input3.0 model.
I am sorry for the inconvenience. All other models (the MultiAra models and the CrazyAra model) should load fine in the Windows CPU version.
However, the Mac and Linux CPU versions should be able to load the new ClassicAra model.
I tried to build a newer MXNet backend library on Windows, but it failed with several out-of-memory compiler errors, despite my machine having 32 GiB of RAM.

As an alternative, I have started to add an OpenVINO backend.
The OpenVINO API can be used with CPUs (AMD & Intel), the Intel Neural Compute Stick, and Intel GPUs. So there will probably be a release 0.9.5.post0 soon to provide a working ClassicAra CPU Windows version.
IQ_QI
Posts: 25
Joined: Wed Dec 05, 2018 8:51 pm
Full name: Johannes Czech

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by IQ_QI »

dkappe wrote: Thu Aug 26, 2021 6:56 pm I’ve used lichess data for training Bad Gyal. One of the challenges I faced is that agreed draws and resignations meant I didn’t have enough endgame and imbalanced positions. To that end, I played all the kingbase 2300+ FIDE games out to terminal with sf10 100k nodes. Made a huge difference in the training.

The data can be found here: https://github.com/dkappe/leela-chess-w ... -Gyal-Data
Thank you for the idea and data link.
I can imagine that playing out games with SF10 greatly improves data quality.

We also experimented with training on crazyhouse games generated by Multi-Variant Stockfish back in 2019, and it worked reasonably well. However, I want to avoid using the search results of third-party engines as training targets for official Ara networks.
dkappe
Posts: 1631
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by dkappe »

IQ_QI wrote: Wed Sep 01, 2021 8:52 pm
dkappe wrote: Thu Aug 26, 2021 6:56 pm I’ve used lichess data for training Bad Gyal. One of the challenges I faced is that agreed draws and resignations meant I didn’t have enough endgame and imbalanced positions. To that end, I played all the kingbase 2300+ FIDE games out to terminal with sf10 100k nodes. Made a huge difference in the training.

The data can be found here: https://github.com/dkappe/leela-chess-w ... -Gyal-Data
Thank you for the idea and data link.
I can imagine that playing out games with SF10 greatly improves data quality.

We also experimented with training on crazyhouse games generated by Multi-Variant Stockfish back in 2019, and it worked reasonably well. However, I want to avoid using the search results of third-party engines as training targets for official Ara networks.
I hear you. I’d just note that outcomes were barely changed. 99.999% of draws were still draws and so on. But in order to play two queens up, you have to have some of those positions in your games. You can use almost any modern engine to the same ends, like Rodent or Xiphos. I just used SF10 because it was fastest.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
Chessqueen
Posts: 5582
Joined: Wed Sep 05, 2018 2:16 am
Location: Moving
Full name: Jorge Picado

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by Chessqueen »

dkappe wrote: Wed Sep 01, 2021 9:22 pm
IQ_QI wrote: Wed Sep 01, 2021 8:52 pm
dkappe wrote: Thu Aug 26, 2021 6:56 pm I’ve used lichess data for training Bad Gyal. One of the challenges I faced is that agreed draws and resignations meant I didn’t have enough endgame and imbalanced positions. To that end, I played all the kingbase 2300+ FIDE games out to terminal with sf10 100k nodes. Made a huge difference in the training.

The data can be found here: https://github.com/dkappe/leela-chess-w ... -Gyal-Data
Thank you for the idea and data link.
I can imagine that playing out games with SF10 greatly improves data quality.

We also experimented with training on crazyhouse games generated by Multi-Variant Stockfish back in 2019, and it worked reasonably well. However, I want to avoid using the search results of third-party engines as training targets for official Ara networks.
I hear you. I’d just note that outcomes were barely changed. 99.999% of draws were still draws and so on. But in order to play two queens up, you have to have some of those positions in your games. You can use almost any modern engine to the same ends, like Rodent or Xiphos. I just used SF10 because it was fastest.
How come CrazyAra cannot be downloaded in one step, ready to play like other engines, instead of having to do all this:

"The binary packages include the required inference libraries for each platform.
However, the models should be downloaded separately and unzipped.

CrazyAra-rl-model-os-96.zip
ClassicAra-sl-model-wdlp-rise3.3-input3.0.zip
MultiAra-rl-models.zip (improved MultiAra models using reinforcement learning (rl) )
MultiAra-sl-models.zip (initial MultiAra models using supervised learning)

Next, move the model files into the model/&lt;engine-name&gt;/&lt;variant&gt; folder"
Do NOT worry and be happy, we all live a short life :roll:
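The unzip-and-move step quoted above can be scripted, which at least makes the manual install a one-liner per model. A hedged Python sketch (the function name and folder layout are my own reading of the quoted instructions; downloading the zip itself is left out):

```python
import zipfile
from pathlib import Path

def install_model(zip_path, install_root, engine_name, variant):
    """Unzip a downloaded model package into the
    model/<engine-name>/<variant> folder that the Ara engines expect,
    creating the folder if necessary. Returns the target folder."""
    target = Path(install_root) / "model" / engine_name / variant
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)
    return target

# Hypothetical usage after downloading a package from the release page:
# install_model("ClassicAra-sl-model-wdlp-rise3.3-input3.0.zip",
#               r"C:\Engines\CrazyAra_0.9.5", "ClassicAra", "chess")
```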
IQ_QI
Posts: 25
Joined: Wed Dec 05, 2018 8:51 pm
Full name: Johannes Czech

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by IQ_QI »

Chessqueen wrote: Wed Sep 01, 2021 9:24 pm
dkappe wrote: Wed Sep 01, 2021 9:22 pm
IQ_QI wrote: Wed Sep 01, 2021 8:52 pm
dkappe wrote: Thu Aug 26, 2021 6:56 pm I’ve used lichess data for training Bad Gyal. One of the challenges I faced is that agreed draws and resignations meant I didn’t have enough endgame and imbalanced positions. To that end, I played all the kingbase 2300+ FIDE games out to terminal with sf10 100k nodes. Made a huge difference in the training.

The data can be found here: https://github.com/dkappe/leela-chess-w ... -Gyal-Data
Thank you for the idea and data link.
I can imagine that playing out games with SF10 greatly improves data quality.

We also experimented with training on crazyhouse games generated by Multi-Variant Stockfish back in 2019, and it worked reasonably well. However, I want to avoid using the search results of third-party engines as training targets for official Ara networks.
I hear you. I’d just note that outcomes were barely changed. 99.999% of draws were still draws and so on. But in order to play two queens up, you have to have some of those positions in your games. You can use almost any modern engine to the same ends, like Rodent or Xiphos. I just used SF10 because it was fastest.
How come CrazyAra cannot be downloaded in one step, ready to play like other engines, instead of having to do all this:

"The binary packages include the required inference libraries for each platform.
However, the models should be downloaded separately and unzipped.

CrazyAra-rl-model-os-96.zip
ClassicAra-sl-model-wdlp-rise3.3-input3.0.zip
MultiAra-rl-models.zip (improved MultiAra models using reinforcement learning (rl) )
MultiAra-sl-models.zip (initial MultiAra models using supervised learning)

Next, move the model files into the model/&lt;engine-name&gt;/&lt;variant&gt; folder"
I see your point and I understand that this is a bit inconvenient.
However, I wanted to avoid uploading the same models multiple times for all available release types (Linux, Windows, and Mac, each in CPU and GPU versions), as that would require a lot more storage on the GitHub webspace.

Moreover, I think not all users are interested in downloading models for every available variant.
The same model is currently also available multiple times for different batch sizes, which is also not ideal.

I am considering uploading a default network for ClassicAra directly into the binary package again for future versions.
IQ_QI
Posts: 25
Joined: Wed Dec 05, 2018 8:51 pm
Full name: Johannes Czech

Re: CrazyAra, ClassicAra, MultiAra 0.9.5 release

Post by IQ_QI »

dkappe wrote: Wed Sep 01, 2021 9:22 pm
IQ_QI wrote: Wed Sep 01, 2021 8:52 pm
dkappe wrote: Thu Aug 26, 2021 6:56 pm I’ve used lichess data for training Bad Gyal. One of the challenges I faced is that agreed draws and resignations meant I didn’t have enough endgame and imbalanced positions. To that end, I played all the kingbase 2300+ FIDE games out to terminal with sf10 100k nodes. Made a huge difference in the training.

The data can be found here: https://github.com/dkappe/leela-chess-w ... -Gyal-Data
Thank you for the idea and data link.
I can imagine that playing out games with SF10 greatly improves data quality.

We also experimented with training on crazyhouse games generated by Multi-Variant Stockfish back in 2019, and it worked reasonably well. However, I want to avoid using the search results of third-party engines as training targets for official Ara networks.
I hear you. I’d just note that outcomes were barely changed. 99.999% of draws were still draws and so on. But in order to play two queens up, you have to have some of those positions in your games. You can use almost any modern engine to the same ends, like Rodent or Xiphos. I just used SF10 because it was fastest.
That is a valid point.
We even noticed similar behaviour when learning Horde via reinforcement learning.
A good example of this is a game between MultiAra and Fairy-Stockfish on lichess.org: although MultiAra appears to be more than 200 Elo stronger than Fairy-Stockfish (classical evaluation) in Horde, it failed to convert a basic endgame (see Figure 5.3, "Elo difference of MultiAra against Fairy-Stockfish", in Assessing Popular Chess Variants Using Deep Reinforcement Learning).

We think this is also due to the fact that the number of games won by White during self-play drastically decreased over time in Horde (see Figure 5.6, "Game outcome of human vs. self-play games", in Assessing Popular Chess Variants Using Deep Reinforcement Learning).
Moreover, the architecture of a convolutional neural network does not seem ideal for dealing with wide-open board positions, since convolutional filters are translation-invariant.
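The translation-invariance point can be illustrated with a toy example: a convolutional filter produces the same response to a feature wherever it appears on the plane, only shifted, so it has no notion of absolute board coordinates. A small NumPy sketch (an 8x8 toy plane standing in for one engine input plane, not real engine data):

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Plain 'valid'-mode 2-D cross-correlation -- the core operation
    of a convolutional layer (no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))

board = np.zeros((8, 8)); board[2, 2] = 1.0      # a "piece" near the corner
shifted = np.zeros((8, 8)); shifted[4, 4] = 1.0  # the same "piece" mid-board

r1 = correlate2d_valid(board, kernel)
r2 = correlate2d_valid(shifted, kernel)
# r2 is exactly r1 shifted by (2, 2): the filter reacts identically to the
# feature in both places and carries no absolute-position information.
```

This equivariance is what makes CNNs data-efficient on board games, but it also means the network cannot natively condition on *where* on the board a pattern sits, which may matter in wide-open Horde positions.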