Guenther wrote: ↑Sat Nov 21, 2020 8:50 am It seems this thread was hijacked for speculations about NNUE
(especially Komodo ones, which were never a matter in this thread before and shouldn't be,
as I never announce commercial releases).
I suggest splitting that part away from the original thread.
Somehow it started with 'Madeleine' dropping in.
Guenther
It was Dietrich Kappe restarting a nine-day-old conversation.
Speculations about NNUE development
Moderators: hgm, Rebel, chrisw
- Posts: 512
- Joined: Tue Sep 29, 2020 4:29 pm
- Location: Dublin, Ireland
- Full name: Madeleine Birchfield
Re: New engine releases 2020
- Posts: 1631
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: New engine releases 2020
AndrewGrant wrote: ↑Sat Nov 21, 2020 8:19 am Rage? What. Also, interesting phrase, "baseless speculations". "baseless accusations" is a thing, but baseless speculations? That is new.
If something from the newspapers of 1957 is new.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
- Posts: 31
- Joined: Tue Feb 27, 2018 11:29 am
Re: New engine releases 2020
dkappe wrote: ↑Sat Nov 21, 2020 7:34 am P.S. On a more useful note, I’ve started using Tord Romstad’s excellent Chess.jl library (https://github.com/romstad/Chess.jl), though it has one major castling bug that I’m working to fix. Pretty speedy for stuff like qsearch.
I'm glad you like it!
Could you please let me know what that castling bug is? Since I'm presumably more familiar with the code than you are, I think I could fix it quite easily.
Edit: Never mind, I just saw that there's a GitHub issue on it. I'll have a look.
- Posts: 1631
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: New engine releases 2020
Tord wrote: ↑Wed Nov 25, 2020 3:04 pm Could you please let me know what that castling bug is? Since I'm presumably more familiar with the code than you are, I think I could fix it quite easily.
Edit: Never mind, I just saw that there's a GitHub issue on it. I'll have a look.
Just as I created a pull request. Thanks for fixing it.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
- Posts: 5566
- Joined: Tue Feb 28, 2012 11:56 pm
Re: New engine releases 2020
Madeleine Birchfield wrote: ↑Wed Nov 11, 2020 11:51 pm With the advent of strong and fast neural network based evaluation functions, I think the 3000 elo limit is too low, and the new limit should be 3200 elo or something.
OliverBr wrote: ↑Thu Nov 12, 2020 5:59 am I wonder what the "true limit" would be, when everybody was only using his own code based on his own ideas.
Chess wouldn't have been invented, since people would have been busy all day hunting small game. (No way you can kill a mammoth using only your own muscle power and your own ideas.)
- Posts: 154
- Joined: Sun Jan 20, 2019 11:23 am
- Full name: kek w
Re: Speculations about NNUE development
Once we figure out how to beat the SF master net with the tools we have available, there will be a "multinet" which replaces the NNUE+classical hybrid approach. I've shown that it is easy to beat classical eval with NNUE in node-vs-node play while also being faster than classical eval. Note that there are currently conflicts between the initial implementation and the current optimisations. It has a 256+x input slice + y hidden layers. Then we decide whether we use the 256 input slice or the smaller input slice (e.g. 64) with a 16x ReLU layer.
In case you want to take a look at the code here it is: https://github.com/Sopel97/Stockfish/tree/multinet3
We don't have a way to train both parts of the net at once (yet). As I said, it needs to be ported to nodchip-master, as without the current optimisations it loses much more speed than it gains. We have checked with a "fake multinet" with up-to-date code and it reached 99.x% of NNUE speed, so I'm fairly confident it will work.
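For readers who want a concrete picture of the head-selection idea, here is a toy Python sketch of choosing between a full 256-input head and a smaller 64-input slice feeding a 16-unit ReLU layer, as described above. All weights, sizes, and names here are hypothetical illustrations; the actual implementation is in the linked Stockfish branch.

```python
import random

random.seed(0)

def rand_vec(n):
    # Hypothetical small random weights; in a real net these are trained.
    return [random.uniform(-0.1, 0.1) for _ in range(n)]

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

W_big = rand_vec(256)                        # full 256-input linear head
W_small = [rand_vec(64) for _ in range(16)]  # 64-input slice -> 16 hidden units
V_small = rand_vec(16)                       # 16 hidden units -> scalar output

def evaluate(acc, use_small_head):
    """Evaluate one shared accumulator with either the big or the small head.

    `acc` stands in for the feature-transformer output that real NNUE
    updates incrementally; both heads read from the same vector.
    """
    if use_small_head:
        # Take only the first 64 dims, apply the 16-unit ReLU layer.
        hidden = [max(0.0, dot(row, acc[:64])) for row in W_small]
        return dot(V_small, hidden)
    return dot(W_big, acc)

# A fake accumulator of 256 clipped (non-negative) activations.
acc = [max(0.0, random.gauss(0, 1)) for _ in range(256)]
score_big = evaluate(acc, use_small_head=False)
score_small = evaluate(acc, use_small_head=True)
```

The point of the design is that the per-position branch only picks which (cheap) head to run; the expensive shared input transformation is computed once either way, which is why the "fake multinet" test above could stay within a fraction of a percent of plain NNUE speed.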