ShashChess 11.0 released
https://github.com/amchess/ShashChess/releases/tag/11.0
https://github.com/amchess/ShashChess/wiki/Matches
ShashChess
Moderators: hgm, Rebel, chrisw
-
- Posts: 365
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
Re: ShashChess
Thank you, Andrea. I hope you will be successful in developing ShashChess.
-
- Posts: 328
- Joined: Tue Dec 05, 2017 2:42 pm
Re: ShashChess
It's a pleasure to announce the new ShashChess 12 release, with a lot of novelties:
-completely revised Shashin theory, so that we reached a main goal of our development:
the best tactical solver (better than Houdini Tactical) without losing strength compared to Stockfish: no other button needed!
Our test results will follow.
In particular, many thanks to JHellis and his latest Crystal derivative.
No more contempt: it's integrated into Shashin theory and there is no rollercoaster effect: the score is truly aligned with the GUI.
We demonstrated the effectiveness of Shashin theory at non-fast time controls, and there is still a very large margin of improvement:
a lot of discarded Stockfish community patches (many thanks for their immense work) can be well integrated thanks to Shashin theory,
improving game play and/or the handling of hard positions.
-new Self Q-learning, a learning variant optimized for self play
-improved LiveBook management
-latest Stockfish patch
Author: Joost VandeVondele
Date: Sat Jun 27 10:22:27 2020 +0200
Timestamp: 1593246147
Revert LTO for mingw on windows.
https://github.com/amchess/ShashChess/releases/tag/12.0
-
- Posts: 365
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
Re: ShashChess
Dear Andrea, I just downloaded ShashChess 12, but the engine BrainLearn is in the zip archive.
-
- Posts: 2801
- Joined: Mon Feb 11, 2008 3:53 pm
- Location: Denmark
- Full name: Damir Desevac
Re: ShashChess
Thanks a lot for all your work.
-
- Posts: 328
- Joined: Tue Dec 05, 2017 2:42 pm
-
- Posts: 3186
- Joined: Sat Feb 16, 2008 7:38 am
- Full name: Peter Martan
Re: ShashChess
Thanks for the new version.
Just one question about the new kind of learning:
which option should be used for experience gained from backward analysis, Standard or Self?
Peter.
-
- Posts: 328
- Joined: Tue Dec 05, 2017 2:42 pm
Re: ShashChess
-
- Posts: 3186
- Joined: Sat Feb 16, 2008 7:38 am
- Full name: Peter Martan
Re: ShashChess
Thanks for the prompt answer, but which option would make the experience file grow faster with Backward analysis?
If learning from self-play is different from game play, is backward analysis then to be compared with self-play or with game play?
How about interactive Forward-Backward? Will more experience data be stored by Standard or by Self?
Peter.
-
- Posts: 328
- Joined: Tue Dec 05, 2017 2:42 pm
Re: ShashChess
peter wrote: ↑Mon Jun 29, 2020 1:22 am
Thanks for the prompt answer, but which option would make the experience file grow faster with Backward analysis?
If learning from self-play is different from game play, is backward analysis then to be compared with self-play or with game play?
How about interactive Forward-Backward? Will more experience data be stored by Standard or by Self?
The Standard experience file accumulates more data than the self-play one.
Self-play doesn't use depth information. It's based on the previous score and updates itself continuously via reinforcement learning.
So you can even use an experience file with the depth fields filled in: they simply won't be used.
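The self-play learning described above (score-based, depth-agnostic, updated continuously by reinforcement learning) can be sketched as a Q-learning-style running update. This is only an illustrative sketch, not ShashChess's actual code: the function name, the learning rate, and the in-memory "experience file" are all assumptions for demonstration.

```python
def q_update(stored_score, new_score, alpha=0.5):
    """Blend a position's stored score toward a newly observed score.

    alpha is an assumed learning rate. Depth is deliberately absent,
    mirroring the description that self-play learning ignores depth
    and relies only on the previous score.
    """
    return stored_score + alpha * (new_score - stored_score)


# A tiny in-memory stand-in for an experience file:
# position key -> learned score in centipawns (keys are hypothetical).
experience = {"pos_after_1e4": 20}

# After another self-play visit evaluates the position at +60 cp,
# the stored score moves partway toward the new observation:
experience["pos_after_1e4"] = q_update(experience["pos_after_1e4"], 60)
print(experience["pos_after_1e4"])  # 40.0
```

Repeated visits keep pulling the stored score toward the latest observations, which matches the "updates itself continuously" behavior described; any depth column present in the file would simply be left unread under this scheme.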