Booot progress

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

booot
Posts: 56
Joined: Sun Jul 03, 2016 8:29 pm

Re: Booot progress

Post by booot » Thu Jun 17, 2021 7:14 am

Next model - 'tiny'. Both kings have their own frame: 2 blocks of 770 features each (including castling-rights features), one for the 'my queen on the board' state and one for the 'my queen off the board' state.

Net structure: [2x1540 features]-2x128-32-32-1 (this worked better than 256-32-32-1)
Trainable params: 403,809
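As a sanity check on the parameter count, a minimal sketch (assuming two unshared 1540-to-128 feature transforms, one per king frame, feeding the shared 32-32-1 tail) reproduces the 403,809 figure; `booot_param_count` is a hypothetical helper, not Booot's actual code:

```python
def booot_param_count(features_per_frame: int, h1: int = 128) -> int:
    """Count trainable params for [2 x features]-2x128-32-32-1,
    assuming each frame has its own (unshared) feature transform."""
    feature_layers = 2 * (features_per_frame * h1 + h1)  # weights + biases, per frame
    dense1 = (2 * h1) * 32 + 32   # concatenated 256 -> 32
    dense2 = 32 * 32 + 32         # 32 -> 32
    output = 32 * 1 + 1           # 32 -> 1
    return feature_layers + dense1 + dense2 + output

print(booot_param_count(1540))  # -> 403809
```

The same formula also matches the later 'small', 'medium', and 'phase' models (6160, 12320, and 4620 features per frame), which suggests all of them share this unshared-feature-transform layout.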

Last 3 epochs:

Epoch 8 ended. Loss-506.30879698096993 mae-16.062606426546996 val_loss-507.3945683053167 val_mae-16.07592207179081
Epoch 9 ended. Loss-503.61318576974355 mae-16.018622826805693 val_loss-505.09183624779865 val_mae-16.039079238470794
Epoch 10 ended. Loss-501.40049000838394 mae-15.98226287210588 val_loss-502.8985820864793 val_mae-16.001325256628192

As I see, this model is more precise on the same training dataset - the final mean error is 1.6% instead of 1.63% with the zero model.

Next step - a model that takes into account dependencies on the own king's location (the king's file).

booot
Posts: 56
Joined: Sun Jul 03, 2016 8:29 pm

Re: Booot progress

Post by booot » Mon Jun 21, 2021 7:47 am

Next result - the 'small' model. Each frame has 8 blocks, one for the own king on each file from a to h.

Net structure: [2x6160 features]-2x128-32-32-1
Trainable params: 1,586,529
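One plausible way to lay out the 8 x 770 = 6160 features per frame is to select a block by the own king's file and offset the base feature into it. This is an assumed layout (and `small_feature_index` a hypothetical helper), not Booot's source:

```python
def small_feature_index(king_file: int, base_feature: int) -> int:
    """Map (own-king file 0..7, base feature 0..769) into one of
    8 blocks of 770 features; an assumed block ordering."""
    assert 0 <= king_file < 8
    assert 0 <= base_feature < 770
    return king_file * 770 + base_feature

print(small_feature_index(7, 769))  # -> 6159, the last of the 6160 features
```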

Last 3 epochs:

Epoch 12 ended. Loss-496.9899788909107 mae-16.01389817406329 val_loss-503.9065640066072 val_mae-16.10437430923791
Epoch 13 ended. Loss-494.93751907653564 mae-15.979766496556174 val_loss-501.9716871766259 val_mae-16.06937743643586
Epoch 14 ended. Loss-493.18555897469525 mae-15.950437666986772 val_loss-500.53513744180435 val_mae-16.04783524178582

This model is bigger, so saturation came later. The model also shows better results than the zero model (mean error is 1.604% instead of 1.636% with the zero model).

Next step - combination of last 2 models: king's file + queen presence features.

booot
Posts: 56
Joined: Sun Jul 03, 2016 8:29 pm

Re: Booot progress

Post by booot » Sat Jun 26, 2021 10:45 pm

Results of the 'medium' model:

Each frame has 16 blocks for each side: 8 for 'my queen on the board' and 8 for 'my queen off the board'. As in the previous model, the 8 blocks correspond to the own king on files a-h.

Net structure : [2x12320 features]-2x128-32-32-1
Trainable params: 3,163,489
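Combining the two previous schemes, the 16 x 770 = 12320 features per frame can be indexed by king file and queen presence together. Again a sketch with an assumed block ordering, not Booot's actual indexing:

```python
def medium_feature_index(king_file: int, my_queen_on_board: bool,
                         base_feature: int) -> int:
    """16 blocks = 8 king files x 2 queen states, 770 features each;
    the (file, queen) -> block ordering here is assumed."""
    assert 0 <= king_file < 8
    assert 0 <= base_feature < 770
    block = king_file * 2 + (1 if my_queen_on_board else 0)
    return block * 770 + base_feature

print(medium_feature_index(7, True, 769))  # -> 12319, the last feature
```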

Last 3 epochs:

Epoch 14 ended. Loss-485.3103777476066 mae-15.814280650217375 val_loss-500.2106061732645 val_mae-15.994519196349463
Epoch 15 ended. Loss-483.29042010532197 mae-15.784154272193817 val_loss-498.68690831026584 val_mae-15.977867512203616
Epoch 16 ended. Loss-481.4337198698073 mae-15.757449858289638 val_loss-496.85648897873887 val_mae-15.948416264699421

The model shows better results than all the previous ones, but it seems there is a lack of training data here: the training loss and validation loss diverge.
So I will not try a bigger model this time.
So, the last model to test on my list is the 'phase' model, which depends on the material left on the board.

booot
Posts: 56
Joined: Sun Jul 03, 2016 8:29 pm

Re: Booot progress

Post by booot » Wed Jun 30, 2021 4:20 pm

So, the last model tested - the 'phase' model. Each frame has 6 blocks depending on the material left on the board.
Net structure: [2x4620 features]-2x128-32-32-1
Trainable params: 1,192,289
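A sketch of how 6 material-phase buckets might select among the 6 x 770 = 4620 features. The bucket thresholds and the material scale below are invented for illustration; the post does not say how Booot partitions the phases:

```python
def phase_bucket(non_pawn_material: int, buckets: int = 6) -> int:
    """Map remaining non-pawn material (in pawn units, both sides,
    0..62 for the full starting set) to one of 6 phase blocks.
    The even partition here is illustrative, not Booot's scheme."""
    limit = 62  # 2 sides x (2x3 + 2x3 + 2x5 + 9) with 3/3/5/9 piece values
    npm = max(0, min(non_pawn_material, limit))
    return min(npm * buckets // (limit + 1), buckets - 1)

def phase_feature_index(non_pawn_material: int, base_feature: int) -> int:
    assert 0 <= base_feature < 770
    return phase_bucket(non_pawn_material) * 770 + base_feature

print(phase_feature_index(62, 769))  # -> 4619, the last of the 4620 features
```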

Last 3 epochs:

Epoch 13 ended. Loss-496.69758990171147 mae-15.87696307645046 val_loss-502.5254309003969 val_mae-15.949056295277499
Epoch 14 ended. Loss-494.7817646273606 mae-15.846662281228484 val_loss-500.6967051358913 val_mae-15.918650920633123
Epoch 15 ended. Loss-493.11734691836375 mae-15.82033125178324 val_loss-499.3411554447848 val_mae-15.896476461256531

Also not a bad model to implement. But I have to test it later with the file-dependent scheme; I need a much bigger dataset for that.

I begin a new phase - "The Battle". I will try quantization-aware training to make nets for Booot and test them inside the engine. I have chosen 2 models: 'tiny' (to train on the existing 250M dataset) and 'medium' (for a future, bigger dataset after 'tiny'). Now I understand the capacity of both models and will try to get quantized nets as close as possible to the float ones.

Wish me good luck!

Kotlov
Posts: 254
Joined: Fri Jul 10, 2015 7:23 pm
Location: Russia

Re: Booot progress

Post by Kotlov » Wed Jun 30, 2021 5:54 pm

booot wrote:
Wed Jun 30, 2021 4:20 pm
Wish me good luck!
Good luck, bro!
Eugene Kotlov
Hedgehog 2.1 64-bit coming soon...

booot
Posts: 56
Joined: Sun Jul 03, 2016 8:29 pm

Re: Booot progress

Post by booot » Mon Jul 19, 2021 11:19 am

After some attempts and failures I have got a test quantized net for Booot! It has 4 layers, (feature layer)-256-32-32-1, where I was able to make all weights int8 except the feature layer. The precision of this net is quite high! So, after a few weeks of holiday I am ready to remake the full net (tiny model) and continue the work. Meanwhile... the chess data is generating!
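A minimal sketch of symmetric per-tensor int8 quantization for one dense layer's weights. Booot's scheme is quantization-aware and its exact scaling is not described in the post, so this shows only the basic idea:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric int8 quantization: q = round(w / scale), with the
    scale chosen so the largest-magnitude weight maps to +/-127."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Round-trip a random 32x32 layer; the worst-case error is ~scale/2.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(32, 32)).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize(q, scale) - w).max())
print(q.dtype, err)
```

With 8 bits per weight the three small dense layers shrink to a quarter of their float32 size, and inference can use int8 SIMD, which is the usual motivation for this step.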

CMCanavessi
Posts: 1080
Joined: Thu Dec 28, 2017 3:06 pm
Location: Argentina

Re: Booot progress

Post by CMCanavessi » Wed Aug 04, 2021 2:34 pm

Do you have plans for FRC support in Booot?
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls
