Mayhem NNUE - New NN engine


connor_mcmonigle
Posts: 530
Joined: Sun Sep 06, 2020 4:40 am
Full name: Connor McMonigle

Re: Mayhem NNUE - New NN engine

Post by connor_mcmonigle »

AndrewGrant wrote: Fri Nov 06, 2020 6:23 am
Madeleine Birchfield wrote: Fri Nov 06, 2020 6:06 am Instead of copying and pasting somebody else's NNUE code, how about implementing the NNUE algorithm yourself, like what the Komodo, Halogen, and Seer authors did?
Halogen is not using NNUE.
We do not know how Komodo was trained.
I don't know about Seer.
(Sorry for the mini essay, but hopefully this clarifies things.)

Halogen is efficiently updating the first layer of its network which I guess makes it an "NNUE", but all this terminology is getting terribly confused by people referring to the Stockfish network architecture and implementation by the term "NNUE".
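To illustrate what "efficiently updating the first layer" means in general, here is a small NumPy sketch (my own toy code, not Halogen's or Stockfish's): the first-layer activations are cached in an accumulator, and a move only adds and subtracts the weight rows of the features that changed. The shapes and feature numbers below are invented for the example.

Code: Select all

# Illustrative sketch of the "efficiently updated" first layer (the UE in NNUE).
# The accumulator is maintained incrementally, so a move touches a few weight
# rows instead of recomputing the whole layer. Sizes here are made up.
import numpy as np

HIDDEN = 256
NUM_FEATURES = 768                   # e.g. 12 piece planes x 64 squares

rng = np.random.default_rng(0)
W = rng.normal(size=(NUM_FEATURES, HIDDEN)).astype(np.float32)  # first-layer weights
b = np.zeros(HIDDEN, dtype=np.float32)                          # first-layer bias

def refresh(active_features):
    """Full recomputation: sum the weight rows of every active feature."""
    return b + W[list(active_features)].sum(axis=0)

def update(acc, removed, added):
    """Incremental update after a move: subtract old rows, add new ones."""
    return acc - W[list(removed)].sum(axis=0) + W[list(added)].sum(axis=0)

# A quiet move changes only two features (a piece leaves one square and lands on
# another), so the incremental path touches 2 rows instead of ~30.
acc = refresh({5, 70, 300})
acc2 = update(acc, removed={70}, added={71})
assert np.allclose(acc2, refresh({5, 71, 300}))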

When Nodchip first ported all the NN training code from computer Shogi to computer chess, two input feature types for efficiently updated networks were introduced, given the names "KP" and "halfKP".

Halogen is using a shallow (relative to the KP networks used in computer Shogi) MLP with KP input, trained on (AFAIK) Ethereal-generated training data using optimized training code written by Andrew. Seer is using a deep (relative to the halfKP networks used in computer Shogi) MLP with concatenation skip connections, as well as some other notable differences; it uses halfKA input as described earlier. Seer's networks are trained using a relatively slow PyTorch script I wrote, on Stockfish-generated data (for now). I'm currently working on switching to training only on self-play games, tabula rasa style. Additionally, I hope to make some modifications to Andrew's awesome training code so that it can produce Seer-compatible networks sometime in the near future.
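For readers new to the jargon, here is a rough sketch of how such feature sets are commonly indexed; the layout is an illustration of the idea only, not the actual encoding used by Halogen, Seer, or Stockfish.

Code: Select all

# Illustrative feature indexing: plain piece-plane inputs ("KP"-style, 768
# features) vs. perspective "halfKP"-style inputs, where each piece feature is
# additionally bucketed by the friendly king's square. Conventions are made up.
NUM_SQUARES = 64
NUM_PIECE_PLANES = 12                      # 6 piece types x 2 colours

def kp_index(piece_plane: int, square: int) -> int:
    """Piece-plane feature: one of 12 * 64 = 768 inputs."""
    return piece_plane * NUM_SQUARES + square

def halfkp_index(king_square: int, piece_plane: int, square: int) -> int:
    """The same feature, crossed with the friendly king square: 64 * 768 = 49152
    inputs per perspective here. (Real chess halfKP drops the kings from the
    piece planes and adds a "none" slot, giving the usually quoted 64 * 641 = 41024.)"""
    return king_square * (NUM_PIECE_PLANES * NUM_SQUARES) + kp_index(piece_plane, square)

# Example: piece plane 1 on square 28, friendly king on square 6.
print(kp_index(1, 28))         # 92
print(halfkp_index(6, 1, 28))  # 6 * 768 + 92 = 4700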

Neither project relies on any Stockfish code nor contains any rewritten Stockfish code, though both projects use the "efficiently updating" trick. By contrast, Komodo is just using rewritten Stockfish code and doing nothing original or interesting, much like this Mayhem NNUE project, which is just copy-pasting the SF evaluation function.

Why's no one else doing what Halogen's author and I have done?

Because it's hard and it requires a fair bit of domain knowledge (nothing one couldn't learn if they were willing to actually invest some effort, though).

Additionally, it's not guaranteed to result in an evaluation function significantly stronger than an HCE. In my understanding, Halogen's evaluation function is significantly weaker on equal nodes than the HCEs found in many 2900+ Elo engines, though it is quite fast to evaluate. Judging by Minic's author's experiments with using my network code in Minic, Seer's networks are around +120 Elo on equal nodes against the HCE found in a 2900+ Elo engine, but are fairly slow to evaluate.
Last edited by connor_mcmonigle on Fri Nov 06, 2020 8:04 am, edited 1 time in total.
Daniel Anulliero
Posts: 759
Joined: Fri Jan 04, 2013 4:55 pm
Location: Nice

Re: Mayhem NNUE - New NN engine

Post by Daniel Anulliero »

JohnWoe wrote: Fri Oct 23, 2020 11:03 am
mvanthoor wrote: Fri Oct 23, 2020 10:52 am
JohnWoe wrote: Thu Oct 22, 2020 11:40 pm Mayhem is Sapeli written in C++14 + SF NNUE evaluation. Thanks to Maksim for simplifying away the DirtyPiece etc. crap!
My engine is quite fast with regard to search already. It can achieve depth 10 in under 15 seconds in many positions WITHOUT even having a TT or other search optimizations yet. The main weakness is that it only has material count + PSQT for an evaluation. I could implement some things such as LMR and other search tricks, a TT, and stick an NNUE onto that search and call it a day. That engine would probably be at least 2500 Elo.

God I hate NNUE. All those engines (and the GPU-engines as well) should be transferred to their own rating list(s), but I digress. There are other topics to discuss this.
Even if you plug in the NNUE it doesn't take anything away from Rustic! You could use that classical evaluation to deliver a quick checkmate/time trouble.
Evaluation is the hardest part to get right. To me the most boring job.
For me, working on the evaluation function is the most interesting job 😉 That's why my engine has a poor search and is a little bit weak. But I think it's the eval that gives an engine its personal playing style. I agree with Brendan and mvanthoor 😉
I took a long break, but now I'm trying to rewrite my eval (must be the 40th time maybe, lol).
Fortunately, we have Graham's tournaments where we can watch "normal" engines play live. Enjoy 😊
Best
Dany
Isa download :
IanKennedy
Posts: 55
Joined: Sun Feb 04, 2018 12:38 pm
Location: UK

Re: Mayhem NNUE - New NN engine

Post by IanKennedy »

"All my efforts in tuning Sapeli's HCE were all in vain."

You've got it the wrong way around. I like Sapeli, and its releases are far more interesting to me; generally, the less Stockfish-like the strength, the more interesting I find an engine. You should take pride in it.
Author of the actively developed PSYCHO chess engine
JohnWoe
Posts: 491
Joined: Sat Mar 02, 2013 11:31 pm

Re: Mayhem NNUE - New NN engine

Post by JohnWoe »

Madeleine Birchfield wrote: Fri Nov 06, 2020 6:06 am Instead of copying and pasting somebody else's NNUE code, how about implementing the NNUE algorithm yourself, like what the Komodo, Halogen, and Seer authors did?
That's a tall order.

On top of the move generator code, protocol handling code, search code and a million other things that make up a chess engine, I would need to write an instruction-heavy NNUE forward pass for the evaluation, some backpropagation setup in TensorFlow, generate a few million FENs with target outputs, and then burn a few trillion CPU cycles to minimize the loss function.
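A toy sketch of just that last step, fitting a small network to (feature, eval) pairs, written here with PyTorch rather than TensorFlow; the file name positions.npz, its keys and the 768-input network shape are all invented for illustration.

Code: Select all

# Generic "minimize the loss function" loop, not any engine's actual trainer.
import numpy as np
import torch
import torch.nn as nn

data = np.load("positions.npz")   # hypothetical dump of features + target evals
x = torch.tensor(data["features"], dtype=torch.float32)             # (N, 768)
y = torch.tensor(data["evals"], dtype=torch.float32).unsqueeze(1)   # (N, 1)

model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(),
                      nn.Linear(256, 32), nn.ReLU(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    for i in range(0, len(x), 4096):          # simple mini-batching
        xb, yb = x[i:i + 4096], y[i:i + 4096]
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                       # backpropagation
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")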

That's a lot of work for one man. For some hobby project.

I'm not writing my own GNU/Linux OS either, nor my main editor, Vim.

Recently I programmed a simple pi digits program. I didn't reinvent the Chudnovsky algorithm from thin air.

Code: Select all

Hello from chudnovski-pi.py!
Calculate pi using Chudnovsky algorithm
Enter the number of digits of PI you want to know or type 'exit' to quit
> 7
3.141592
> 100
3.141592653589676253460425927962363119916001375423355262011051181644063904982648340947121517957145883
> exit
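For comparison, a textbook-style Chudnovsky implementation in Python looks roughly like the following; this is a generic sketch using the decimal module, not JohnWoe's chudnovski-pi.py.

Code: Select all

# Chudnovsky series with Python's decimal module (illustrative, textbook-style).
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    getcontext().prec = digits + 10              # working precision + guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for k in range(1, digits // 14 + 2):         # each term adds ~14 digits
        M = M * (K ** 3 - 16 * K) // k ** 3
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    getcontext().prec = digits + 1
    return +(C / S)                              # unary + rounds to final precision

print(chudnovsky_pi(50))                         # pi to roughly 50 decimal places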
mvanthoor wrote: Fri Nov 06, 2020 2:31 am
JohnWoe wrote: Thu Nov 05, 2020 11:37 pm I still can't believe my little program beats a heavy weight. 2700 Elo on CCRL. Sapeli gave his life for a good cause.
So you did take Sapeli offline on GitHub to replace it with Mayhem? Pity. I liked Sapeli. It was a nice little engine, quite strong for its size. I made it a target to beat with Rustic, and to do so using the fewest features possible... I'm going to try and see if I can beat TSCP by brute force, by NOT adding pawn knowledge and open files to my evaluation until after. I had a similar goal in mind for Sapeli, but now with Mayhem, it's of no use because your engine gained 500 Elo or thereabouts overnight.

If you want to put your efforts into Mayhem, that's up to you; but why would you take Sapeli off-line because of that?
I mainly focus on Mayhem now. Maintaining 2 similar chess engines is too much for one man. I don't think it will come back. Sapeli will always stay 100% original. Sapeli was really fast but dumb due to bad evaluation. Your engine will not have problems beating Sapeli!

Mayhem is Sapeli except for the NNUE evaluation, which I shamelessly copied. I like C++ more than C because I can write more compact code with it, which reduces programmer errors. I write in an imperative style and use C++ tools whenever it makes sense.

I've added null move and things like that in Mayhem. The hash table now lives on the heap instead of the stack. Even clang++ builds Mayhem without problems. Mayhem is super stable; I haven't seen any crashes or anything like that.

I've been writing a new Ataxx engine in Rust and plan not to copy-paste anything. It will be 100% original.
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Mayhem NNUE - New NN engine

Post by mvanthoor »

JohnWoe wrote: Fri Nov 06, 2020 1:13 pm I mainly focus on Mayhem now. Maintaining 2 similar chess engines is too much for one man. I don't think it will come back. Sapeli will always stay 100% original. Sapeli was really fast but dumb due to bad evaluation. Your engine will not have problems beating Sapeli!
That is still the question. At this point, Rustic is also very fast, but also very dumb. In the 1500-1800 Elo range, Rustic can completely destroy some 1750+ engines tactically because it just outcalculates them by 2-3 ply in the middle game. It is fast enough to even keep up with some engines such as Mizar 3, which already has a transposition table (and Rustic doesn't). On the other hand, Rustic can sometimes also be steamrolled by a 1600 Elo engine because it just lacks any knowledge. It's happy to go chase and capture a knight for two pawns on the other side of the board while the opponent is breaking down Rustic's king safety (which will then lead to a mating attack afterwards... with Rustic's pieces being on the wrong side of the board).
Mayhem is Sapeli except for the NNUE evaluation, which I shamelessly copied. I like C++ more than C because I can write more compact code with it, which reduces programmer errors. I write in an imperative style and use C++ tools whenever it makes sense.
You said above that you don't want to maintain Sapeli. You don't have to. Just update the comments, note in them that you forked Sapeli into Mayhem, and tag it as "Sapeli 2.0 Final." Then you're done. You can just leave the repository and the engine online; there is no reason to take it offline. Now I can't test against Sapeli, or I'll have to dig up some version from an unknown site somewhere.
I've been writing a new Ataxx engine in Rust and plan not to copy-paste anything. It will be 100% original.
Cool. I don't know if there are any user interfaces for that, or something like a UCI or XBoard protocol. I didn't know the game before someone mentioned it on this site.
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
Terje
Posts: 347
Joined: Tue Nov 19, 2019 4:34 am
Location: https://github.com/TerjeKir/weiss
Full name: Terje Kirstihagen

Re: Mayhem NNUE - New NN engine

Post by Terje »

mvanthoor wrote: Fri Nov 06, 2020 3:54 pm
I've been writing a new Ataxx engine in Rust and plan not to copy-paste anything. It will be 100% original.
Cool. I don't know if there are any user interfaces for that, or something like a UCI or XBoard protocol. I didn't know the game before someone mentioned it on this site.
The UAI protocol exists (replace chess with Ataxx everywhere and you're basically done), and there's an engine programming Discord where Ataxx-related help is available :)
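For the curious, a bare-bones UAI loop might look roughly like this. It is a sketch only, assuming the protocol really does mirror UCI with the "uci"/"uciok" handshake renamed to "uai"/"uaiok"; check the actual spec (and the Ataxx move notation) before relying on it.

Code: Select all

# Minimal, illustrative UAI-style loop; keywords and move formatting are assumptions.
import sys

def uai_loop(engine_name="toy-ataxx"):
    for line in sys.stdin:
        tokens = line.split()
        if not tokens:
            continue
        cmd = tokens[0]
        if cmd == "uai":
            print(f"id name {engine_name}")
            print("uaiok")
        elif cmd == "isready":
            print("readyok")
        elif cmd == "position":
            pass                      # parse "startpos"/FEN plus moves here
        elif cmd == "go":
            print("bestmove a1")      # placeholder; a real engine searches here
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    uai_loop()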
AndrewGrant
Posts: 1754
Joined: Tue Apr 19, 2016 6:08 am
Location: U.S.A
Full name: Andrew Grant

Re: Mayhem NNUE - New NN engine

Post by AndrewGrant »

connor_mcmonigle wrote: Fri Nov 06, 2020 8:02 am
Halogen is using a shallow (relative to the KP networks used in computer Shogi) MLP using KP input trained on (AFAIK) Ethereal generated training data using optimized training code written by Andrew.
So, Halogen is not using "KP", i.e. a mapping of king-piece combinations, in its network, nor is it using the "half" idea. It's the equivalent of concatenating each of the piece-color bitboards and calling that your input layer.
connor_mcmonigle wrote: Fri Nov 06, 2020 8:02 am
Neither project relies on any Stockfish code nor contains any rewritten Stockfish code, though both projects use the "efficiently updating" trick. By contrast, Komodo is just using rewritten Stockfish code and doing nothing original/interesting much like this Mayhem NNUE project which is just copying pasting the SF evaluation function.
So I don't think there is any proof that Komodo is using the Stockfish trainer. However, it is my belief that it does, based on who trained the networks and an apparent lack of interest in claiming original training code and methods. I suspect that belief will eventually be stamped out, as it would be extremely disappointing if Stockfish's trainer was indeed reused. Worth noting, however, that it is perfectly within the Komodo team's rights to use Stockfish's trainer, despite Komodo not being a GPL or equivalently licensed project.
connor_mcmonigle wrote: Fri Nov 06, 2020 8:02 am
Why's no one else doing as Halogen's author and I have done? Because it's hard and it requires a fair bit of domain knowledge (nothing one couldn't learn if they were actually willing to actually invest some effort though).
I watched like 4 YouTube videos in order to implement backpropagation in the NNTrainer repo you and Halogen's author have access to. There is this ... endemic issue in the computer chess world where people don't actually even attempt to understand machine learning. With the exception of Arasan, I believe I'm responsible for everyone porting their Texel tuners to gradient descent -- something which should have happened long before I was around. Multiple conversations with others working on NNs seem to show that most just don't have a damn clue. It's my estimation that those who do not understand how to build and train an NN don't deserve to be pasting them.
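For anyone wondering what "porting a Texel tuner to gradient descent" amounts to in practice, here is a small toy sketch (my own illustration, not code from NNTrainer): the evaluation is a dot product of feature values and weights, squashed through a sigmoid and fitted to game results, with the gradient computed analytically instead of nudging one weight at a time.

Code: Select all

# Toy Texel-style tuning by batch gradient descent (illustrative only).
# eval(pos) = w . features(pos); predicted score = sigmoid(k * eval);
# minimise mean squared error against game results (0, 0.5, 1).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tune(features, results, k=1.0 / 400.0, lr=1000.0, epochs=200):
    """features: (N, M) evaluation terms per position, results: (N,) game results."""
    w = np.zeros(features.shape[1])
    n = len(results)
    for _ in range(epochs):
        pred = sigmoid(k * (features @ w))
        err = pred - results
        # d/dw of mean((pred - result)^2), chain rule through the sigmoid
        grad = (2.0 / n) * features.T @ (err * pred * (1.0 - pred) * k)
        w -= lr * grad
    return w

# Toy usage with random data, purely to show the shapes involved.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # 8 hypothetical eval terms
y = rng.integers(0, 3, size=1000) / 2.0         # fake results: 0, 0.5 or 1
print(tune(X, y)[:4])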
#WeAreAllDraude #JusticeForDraude #RememberDraude #LeptirBigUltra
"Those who can't do, clone instead" - Eduard ( A real life friend, not this forum's Eduard )
connor_mcmonigle
Posts: 530
Joined: Sun Sep 06, 2020 4:40 am
Full name: Connor McMonigle

Re: Mayhem NNUE - New NN engine

Post by connor_mcmonigle »

AndrewGrant wrote: Fri Nov 06, 2020 6:46 pm ...
So, Halogen is not using "KP", i.e. a mapping of king-piece combinations, in its network, nor is it using the "half" idea. It's the equivalent of concatenating each of the piece-color bitboards and calling that your input layer.
...
The naming introduced here is pretty bad in my opinion, but the piece-plane concatenation (resulting in 768 input features) you describe is actually exactly what the KP features are, to the best of my knowledge. This understanding is informed by some discussion I had with Nodchip as well as the initial NNUE code announcement:

Code: Select all

I (Nodchip)  released a new binary set "stockfish-nnue-2020-05-30" for training data generation and training.
https://github.com/nodchip/Stockfish/releases/tag/stockfish-nnue-2020-05-30
Please get it before trying the below.

Training in Stockfish+NNUE consists of two phases, "training data generation phase" and "training phase".

In the training data generation phase, we will create training data with the "gensfen" command.
In the first iteration, we will create training data with the original Stockfish evaluation function.
This can be done with "stockfish.nnue-gen-sfen-from-original-eval.exe" in "stockfish-nnue-2020-05-30".
The command will be like:

uci
setoption name Hash value 32768  <- This value must be lower than the total memory size of your PC.
setoption name Threads value 8  <- This value must be equal to or lower than the number of the logical CPU cores of your PC.
isready
gensfen depth 8 loop 10000000 output_file_name trainingdata\generated_kifu.bin
quit

Before creating the training data, please make a folder for the training data. 
In the command above, the name of the folder is "trainingdata".
The training data generation takes a long time.  Please be patient.
For detail options of the "gensfen" command, please refer learn/learner.cpp.<- In the source code (src\learn\learner.cpp)

We also need validation data so that we measure if the training goes well. 
The command will be like:

uci
setoption name Hash value 32768
setoption name Threads value 8
isready
gensfen depth 8 loop 1000000 output_file_name validationdata\generated_kifu.bin
quit

Before creating the validation data, please make a folder for the validation data.  
In the command above, the name of the folder is "validationdata".

In the training phase, we will train the NN evaluation function with the "learn" command.  Please use "stockfish.nnue-learn-use-blas.k-p_256x2-32-32.exe" for the "learn" command.
In the first iteration, we need to initialize the NN parameters with random values, and learn from learning data.
Setting the SkipLoadingEval option will initialize the NN with random parameters.  The command will be like:

uci
setoption name SkipLoadingEval value true
setoption name Threads value 8
isready
learn targetdir trainingdata loop 100 batchsize 1000000 eta 1.0 lambda 0.5 eval_limit 32000 nn_batch_size 1000 newbob_decay 0.5 eval_save_interval 10000000 loss_output_interval 1000000 mirror_percentage 50 validation_set_file_name validationdata\generated_kifu.bin
quit

Please make sure that the "test_cross_entropy" in the progress messages will be decreased.
If it is not decreased, the training will fail.  In that case, please adjust "eta", "nn_batch_size", or other parameters.
If test_cross_entropy is decreased enough, the training will be finished.
Congrats!
If you want to save the trained NN parameter files into a specific folder, please set "EvalSaveDir" option.

We could repeat the "training data generation phase" and "training phase" again and again with the output NN evaluation functions in the previous iteration.
This is a kind of reinforcement learning.
After the first iteration, please use "stockfish.nnue-learn-use-blas.k-p_256x2-32-32.exe" to generate training data so that we use the output NN parameters in the previous iteration.
Also, please set "SkipLoadingEval" to false in the training phase so that the trainer loads the NN parameters in the previous iteration.

We also could change the network architecture.
The network architecture in "stockfish-nnue-2020-05-30" is "k-p_256x2-32-32".
"k-p" means the input feature.
"k" means "king", the one-shot encoded position of a king.
"p" means "peace", the one-shot encoded position and type of a piece other than king.
"256x2-32-32" means the number of the channels in each hidden layer.
The number of the channels in the first hidden layer is "256x2".
The number of the channels in the second and the third is "32".

The standard network architecture in computer shogi is "halfkp_256x2-32-32".
"halfkp" means the direct product of "k" and "p" for each color.
If we use "halfkp_256x2-32-32", we could need more training data because the number of the network parameters is much larger than "k-p_256x2-32-32".
We could need 300,000,000 training data for each iteration.
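To put the size difference described at the end of that quote into rough numbers (my own back-of-the-envelope arithmetic, using the feature counts usually quoted for chess; exact layouts vary between implementations):

Code: Select all

# Approximate first-layer weight counts for the two architectures above.
hidden = 256

kp_inputs = 12 * 64                      # 768 piece-plane features
halfkp_inputs = 64 * (10 * 64 + 1)       # 41024, the usual chess halfKP figure

kp_weights = kp_inputs * hidden          # ~0.2 million
halfkp_weights = halfkp_inputs * hidden  # ~10.5 million (shared by both perspectives)

print(kp_weights, halfkp_weights, halfkp_weights // kp_weights)   # ratio ~53x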
AndrewGrant wrote: Fri Nov 06, 2020 6:46 pm ...
I watched like 4 YouTube videos in order to implement backpropagation in the NNTrainer repo you and Halogen's author have access to. There is this ... endemic issue in the computer chess world where people don't actually even attempt to understand machine learning. With the exception of Arasan, I believe I'm responsible for everyone porting their Texel tuners to gradient descent -- something which should have happened long before I was around. Multiple conversations with others working on NNs seem to show that most just don't have a damn clue. It's my estimation that those who do not understand how to build and train an NN don't deserve to be pasting them.
...
I 100 percent agree. People shouldn't have code in their engine they don't themselves understand and certainly shouldn't be copying SF code into their much weaker engine.
JohnWoe
Posts: 491
Joined: Sat Mar 02, 2013 11:31 pm

Re: Mayhem NNUE - New NN engine

Post by JohnWoe »

I've done a lot of work on my engine: tons of bug fixes, speedups and whatnot...
The earlier version even managed to beat Fruit with a massive search bug. :D

Mayhem needs lots of hash, at least 256 MB, to work properly. --bench should return at least 1,000,000 NPS; then your CPU is strong enough.

Proudly presenting: Mayhem NNUE 0.50
Release: https://github.com/SamuraiDangyo/mayhem ... /tag/v0.50
Source code: https://github.com/SamuraiDangyo/mayhem

60/40 blitz games vs the CCRL 2952 Elo Crafty 25.6: to me, it seems like Mayhem is 3000+ Elo. :shock:
OK, too few games, but still excellent results!

Code: Select all

Score of Mayhem NNUE 0.50 vs Crafty-25.6: 37 - 22 - 13  [0.604] 72
Elo difference: 73.46 +/- 75.10
Finished match
(1200 lines) Mayhem versus 2700 Elo Fruit 2.1!

Code: Select all

Score of Mayhem NNUE 0.50 vs Fruit 2.1: 7 - 2 - 1  [0.750] 10
Elo difference: 190.85 +/- 658.29
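For anyone who wants to sanity-check those numbers, the reported Elo difference follows directly from the score fraction; a small sketch (mine) that approximately reproduces the 73.46 +/- 75.10 from the Crafty match above:

Code: Select all

# Reproducing (approximately) a match tool's Elo estimate from W/L/D counts.
import math

def elo_diff(score):                        # score = fraction of points won
    return -400.0 * math.log10(1.0 / score - 1.0)

wins, losses, draws = 37, 22, 13
games = wins + losses + draws
score = (wins + 0.5 * draws) / games        # 43.5 / 72 ~= 0.604
print(round(elo_diff(score), 2))            # 73.46, matching the output above

# The +/- margin comes from the spread of per-game outcomes (~95% interval).
var = (wins * (1 - score) ** 2 + draws * (0.5 - score) ** 2 + losses * score ** 2) / games
stdev = math.sqrt(var / games)
margin = (elo_diff(score + 1.96 * stdev) - elo_diff(score - 1.96 * stdev)) / 2
print(round(margin, 1))                     # about 75, close to the reported margin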
In this long grind, Crafty was a pawn up, but the position was totally drawish. Somehow Crafty went on to lose it. This shows that null move works perfectly in Mayhem.
[pgn][Event "?"]
[Site "?"]
[Date "2020.11.10"]
[Round "2"]
[White "Crafty-25.6"]
[Black "Mayhem NNUE 0.50"]
[Result "0-1"]
[TimeControl "60/40"]
[ECO "A00"]
[Opening "Dunst (Sleipner, Heinrichsen) Opening"]
[PlyCount "291"]
[Termination "adjudication"]
[Annotator "1. +0,19 1... +0,17"]

1. Nc3 {+0,19/17 5}
{s}
1... d5 {+0,17/11 1,3}
{s}
2. d4 {+0,09/17 1,1}
{s}
2... Nf6 {+0,18/10 1,3}
{s}
3. Nf3 {+0,17/16 8}
{s}
3... e6 {+0,24/10 1,2}
{s}
4. Bg5 {+0,18/16 5}
{s}
4... Be7 {+0,31/9 1,2}
{s}
5. e3 {+0,30/15 5}
{s}
5... c5 {+0,36/9 1,2}
{s}
6. dxc5 {+0,30/15 9}
{s}
6... Qa5 {+0,30/9 1,1}
{s}
7. a3 {+0,66/15 1,6}
{s}
7... Qxc5 {+0,07/9 1,1}
{s}
8. Be2 {+0,41/15 9}
{s}
8... a6 {+0,77/9 1,1}
{s}
9. Qd3 {+0,21/16 8}
{s}
9... Nc6 {+0,65/8 1,0}
{s}
10. b4 {+0,21/15 4}
{s}
10... Qd6 {+0,68/9 10}
{s}
11. O-O {+0,21/16 1,2}
{s}
11... O-O {+0,40/9 9}
{s}
12. Rfd1 {+0,21/14 6}
{s}
12... h6 {+0,37/8 9}
{s}
13. Bxf6 {+0,57/17 4}
{s}
13... Bxf6 {-0,18/10 9}
{s}
14. Ne4 {+0,57/18 4}
{s}
14... Qe7 {-0,87/11 9}
{s}
15. Nxf6+ {+0,52/18 4}
{s}
15... Qxf6 {-0,53/11 8}
{s}
16. c4 {+0,56/18 4}
{s}
16... dxc4 {-0,82/10 8}
{s}
17. Qxc4 {+0,62/17 5}
{s}
17... e5 {-0,50/9 8}
{s}
18. Qc5 {+0,65/15 6}
{s}
18... Be6 {-0,44/9 7}
{s}
19. b5 {+0,46/16 9}
{s}
19... axb5 {-0,02/9 7}
{s}
20. Bxb5 {+0,55/17 9}
{s}
20... Rfc8 {-0,17/10 7}
{s}
21. Bxc6 {+0,12/17 7}
{s}
21... Rxc6 {-0,28/10 7}
{s}
22. Qb5 {+0,39/16 5}
{s}
22... Bg4 {-0,01/9 6}
{s}
23. Qxe5 {+0,17/20 4}
{s}
23... Qxe5 {-0,31/10 6}
{s}
24. Nxe5 {+0,13/21 4}
{s}
24... Bxd1 {-0,25/12 6}
{s}
25. Nxc6 {+0,12/21 4}
{s}
25... bxc6 {-0,25/12 6}
{s}
26. Rxd1 {+0,16/20 4}
{s}
26... Rxa3 {-0,12/11 6}
{s}
27. Kf1 {+0,16/18 4}
{s}
27... c5 {-0,12/10 5}
{s}
28. Ke2 {+0,10/17 7}
{s}
28... Kf8 {-0,23/9 5}
{s}
29. Rd7 {+0,07/18 7}
{s}
29... c4 {-0,10/9 5}
{s}
30. Rc7 {+0,29/18 9}
{s}
30... c3 {+0,00/11 5}
{s}
31. g4 {+0,32/18 1,9}
{s}
31... Ra2+ {+0,08/10 5}
{s}
32. Kf3 {+0,19/18 4}
{s}
32... c2 {+0,06/11 5}
{s}
33. Ke4 {+0,17/19 8}
{s}
33... Ke8 {+0,02/11 5}
{s}
34. f3 {+0,78/17 4}
{s}
34... Kd8 {-0,01/11 5}
{s}
35. Rc5 {+0,64/17 7}
{s}
35... h5 {-0,13/11 5}
{s}
36. gxh5 {+1,41/18 4}
{s}
36... Ra4+ {-0,13/11 5}
{s}
37. Kd3 {+1,40/18 4}
{s}
37... Ra5 {-0,30/12 5}
{s}
38. Rxc2 {+1,62/15 4}
{s}
38... Rxh5 {-0,19/10 5}
{s}
39. e4 {+1,43/17 5}
{s}
39... g6 {-0,14/10 5}
{s}
40. Ke3 {+1,14/18 1,7}
{s}
40... Ke8 {-0,14/10 5}
{s}
41. Kf4 {+1,22/17 7}
{s}
41... Ke7 {-0,28/10 5}
{s}
42. Kg4 {+1,22/17 7}
{s}
42... f5+ {-0,19/11 5}
{s}
43. exf5 {+1,13/17 5}
{s}
43... Rxf5 {-0,19/11 5}
{s}
44. Rc7+ {+1,04/17 1,5}
{s}
44... Kf6 {-0,18/10 5}
{s}
45. Rc6+ {+1,01/18 3}
{s}
45... Kg7 {-0,17/12 5}
{s}
46. Rd6 {+0,98/20 8}
{s}
46... Ra5 {-0,16/10 5}
{s}
47. h4 {+0,98/20 7}
{s}
47... Ra4+ {-0,15/10 5}
{s}
48. Kg5 {+0,98/21 3}
{s}
48... Ra5+ {-0,09/11 5}
{s}
49. Kf4 {+0,98/23 3}
{s}
49... Ra4+ {-0,09/10 5}
{s}
50. Kg3 {+0,98/23 4}
{s}
50... Kh6 {-0,12/11 5}
{s}
51. Re6 {+0,98/21 4}
{s}
51... Ra1 {-0,09/11 5}
{s}
52. Re4 {+0,98/19 7}
{s}
52... Rg1+ {-0,08/10 5}
{s}
53. Kh3 {+0,93/20 8}
{s}
53... Rh1+ {-0,06/11 5}
{s}
54. Kg2 {+0,86/21 4}
{s}
54... Rc1 {-0,08/11 5}
{s}
55. Re5 {+0,75/20 8}
{s}
55... Rb1 {-0,09/11 5}
{s}
56. Rd5 {+0,98/22 4}
{s}
56... Ra1 {-0,06/11 5}
{s}
57. Rg5 {+0,86/20 9}
{s}
57... Ra4 {-0,07/11 5}
{s}
58. Rg4 {+0,88/21 4}
{s}
58... Ra1 {-0,05/11 5}
{s}
59. Kg3 {+0,86/24 5}
{s}
59... Rg1+ {-0,03/11 5}
{s}
60. Kh2 {-327,68/25 6}
{s}
60... Rf1 {-0,03/12 5}
{s}
61. Rg3 {+0,84/21 4}
{s}
61... Rf2+ {-0,01/14 1,3}
{s}
62. Kh3 {+0,84/23 4}
{s}
62... Rf1 {-0,01/13 1,3}
{s}
63. Kg2 {+0,77/22 6}
{s}
63... Ra1 {-0,03/13 1,2}
{s}
64. Kh2 {+0,75/21 10}
{s}
64... Rf1 {+0,00/14 1,2}
{s}
65. Kh3 {+0,70/22 4}
{s}
65... Rb1 {-0,05/13 1,2}
{s}
66. Rg2 {+0,62/21 4}
{s}
66... Rc1 {-0,07/12 1,1}
{s}
67. Kg4 {+0,61/20 4}
{s}
67... Ra1 {-0,08/13 1,1}
{s}
68. Rh2 {+0,61/22 8}
{s}
68... Rg1+ {+0,00/12 1,1}
{s}
69. Kf4 {+0,63/19 4}
{s}
69... Ra1 {-0,04/12 1,0}
{s}
70. Kg3 {+0,58/19 5}
{s}
70... Rg1+ {-0,05/12 10}
{s}
71. Kf2 {+0,41/20 9}
{s}
71... Rc1 {+0,00/12 9}
{s}
72. Rg2 {+0,13/19 9}
{s}
72... Kh5 {-0,04/11 9}
{s}
73. Rg5+ {+0,98/20 4}
{s}
73... Kh6 {-0,07/12 9}
{s}
74. Kg2 {+0,75/21 8}
{s}
74... Rc4 {-0,06/13 9}
{s}
75. Kg3 {+0,86/22 4}
{s}
75... Rc1 {-0,06/13 8}
{s}
76. Ra5 {+0,86/20 4}
{s}
76... Rg1+ {-0,06/12 8}
{s}
77. Kh2 {+0,75/23 4}
{s}
77... Rf1 {-0,05/13 8}
{s}
78. Kg2 {+0,75/22 4}
{s}
78... Rc1 {-0,05/12 7}
{s}
79. Rb5 {+0,75/23 4}
{s}
79... Re1 {-0,03/12 7}
{s}
80. Rc5 {+0,86/23 4}
{s}
80... Rb1 {-0,04/11 7}
{s}
81. Kf2 {+0,75/20 9}
{s}
81... Ra1 {-0,04/12 7}
{s}
82. Kg3 {+0,75/20 4}
{s}
82... Rg1+ {-0,04/12 6}
{s}
83. Kh2 {+0,00/21 3}
{s}
83... Rf1 {+0,00/12 6}
{s}
84. Kg2 {+0,00/26 4}
{s}
84... Rb1 {+0,00/15 6}
{s}
85. Re5 {+0,73/21 9}
{s}
85... Ra1 {-0,02/12 6}
{s}
86. Rd5 {+0,75/22 5}
{s}
86... Ra7 {-0,01/12 6}
{s}
87. Kg3 {+0,71/21 1,1}
{s}
87... Ra1 {-0,01/12 5}
{s}
88. Rd8 {+0,71/21 4}
{s}
88... Rg1+ {-0,01/12 5}
{s}
89. Kh2 {+0,71/21 5}
{s}
89... Rc1 {+0,00/12 5}
{s}
90. Rg8 {+0,71/22 4}
{s}
90... Ra1 {+0,00/12 5}
{s}
91. Kg2 {+0,71/20 4}
{s}
91... Kh7 {+0,00/12 5}
{s}
92. Rb8 {+0,51/20 1,4}
{s}
92... Ra3 {+0,00/10 5}
{s}
93. Rb1 {+0,71/21 7}
{s}
93... Kh6 {+0,00/13 5}
{s}
94. Kg3 {+0,68/22 4}
{s}
94... Rd3 {+0,00/11 5}
{s}
95. Kg4 {+0,68/22 4}
{s}
95... Rd2 {-0,10/11 5}
{s}
96. f4 {+0,67/23 4}
{s}
96... Rg2+ {-0,08/12 5}
{s}
97. Kf3 {+0,67/22 4}
{s}
97... Rc2 {-0,08/11 5}
{s}
98. Rg1 {+0,67/22 9}
{s}
98... Ra2 {-0,09/11 5}
{s}
99. Kg4 {+0,67/22 9}
{s}
99... Kg7 {-0,11/11 5}
{s}
100. Rd1 {+0,67/19 4}
{s}
100... Rg2+ {-0,08/11 5}
{s}
101. Kh3 {+0,59/19 4}
{s}
101... Rc2 {-0,08/10 5}
{s}
102. Rd6 {+0,49/22 1,3}
{s}
102... Rc1 {-0,05/10 5}
{s}
103. Kg2 {+0,49/22 4}
{s}
103... Rc7 {-0,10/9 5}
{s}
104. Rb6 {+0,49/22 7}
{s}
104... Rc3 {-0,06/10 5}
{s}
105. Re6 {+0,49/22 2,3}
{s}
105... Rd3 {-0,05/10 5}
{s}
106. Rc6 {+0,16/21 4}
{s}
106... Rd4 {-0,07/10 5}
{s}
107. Kf3 {+0,00/23 1,2}
{s}
107... Rd3+ {-0,11/10 5}
{s}
108. Kg4 {+0,00/22 6}
{s}
108... Rd1 {-0,07/11 5}
{s}
109. Re6 {+0,00/23 4}
{s}
109... Rg1+ {-0,02/11 5}
{s}
110. Kf3 {+0,00/24 4}
{s}
110... Rf1+ {-0,08/10 5}
{s}
111. Ke4 {+0,00/25 4}
{s}
111... Re1+ {+0,00/11 5}
{s}
112. Kd5 {+0,00/26 4}
{s}
112... Rd1+ {+0,00/10 5}
{s}
113. Kc6 {+0,00/23 4}
{s}
113... Rd4 {+0,08/11 5}
{s}
114. Re7+ {+0,00/22 4}
{s}
114... Kf6 {+0,17/12 5}
{s}
115. Rd7 {+0,00/24 5}
{s}
115... Rc4+ {+0,13/12 5}
{s}
116. Kd5 {+0,00/28 5}
{s}
116... Rxf4 {+0,08/12 5}
{s}
117. Rh7 {+0,00/30 6}
{s}
117... Ra4 {+0,13/12 5}
{s}
118. Kc5 {+0,00/24 7}
{s}
118... Ra8 {+0,15/10 5}
{s}
119. Rd7 {+0,00/23 8}
{s}
119... Rh8 {+0,11/10 5}
{s}
120. Rd6+ {+0,00/25 6}
{s}
120... Kf5 {+0,24/12 5}
{s}
121. Rd5+ {+0,00/26 4}
{s}
121... Kf4 {+0,28/14 1,3}
{s}
122. Rd4+ {+0,00/27 4}
{s}
122... Kg3 {+0,25/13 1,3}
{s}
123. Rd6 {+0,00/28 4}
{s}
123... Rg8 {+0,35/14 1,2}
{s}
124. Rd4 {+0,00/31 4}
{s}
124... Rc8+ {+0,60/13 1,2}
{s}
125. Kb6 {+0,00/27 4}
{s}
125... Kh3 {+0,25/13 1,2}
{s}
126. Re4 {+0,00/24 4}
{s}
126... Rc2 {+1,03/14 1,1}
{s}
127. Rd4 {-0,65/19 1,3}
{s}
127... Rg2 {+1,44/15 1,1}
{s}
128. Kc6 {-0,79/19 5}
{s}
128... Rg4 {+2,04/14 1,1}
{s}
129. Rd1 {-1,01/18 10}
{s}
129... Rxh4 {+2,23/14 1,0}
{s}
130. Rg1 {-1,17/18 1,3}
{s}
130... Rg4 {+2,98/14 10}
{s}
131. Rh1+ {-1,34/17 1,5}
{s}
131... Kg2 {+2,91/14 9}
{s}
132. Rh8 {-1,83/19 2,4}
{s}
132... g5 {+3,24/14 9}
{s}
133. Rg8 {-2,07/19 5}
{s}
133... Kf3 {+4,66/14 9}
{s}
134. Kc5 {-2,06/18 1,1}
{s}
134... Rg1 {+5,39/13 9}
{s}
135. Rf8+ {-2,20/17 4}
{s}
135... Kg3 {+5,14/13 8}
{s}
136. Rf6 {-2,67/16 1,3}
{s}
136... Rd1 {+6,41/13 8}
{s}
137. Kc4 {-3,93/20 1,3}
{s}
137... g4 {+7,35/13 8}
{s}
138. Kc3 {-4,08/21 6}
{s}
138... Kg2 {+7,63/13 7}
{s}
139. Rf8 {-6,52/21 1,5}
{s}
139... Rd6 {+7,99/12 7}
{s}
140. Kb4 {-6,58/24 7}
{s}
140... Rd3 {+7,99/12 7}
{s}
141. Kc4 {-6,52/20 5}
{s}
141... Rf3 {+7,58/14 7}
{s}
142. Rc8 {-3,47/17 1,2}
{s}
142... g3 {+8,19/12 7}
{s}
143. Kd5 {-6,48/17 5}
{s}
143... Rf4 {+8,54/12 6}
{s}
144. Rc6 {-6,68/21 7}
{s}
144... Kf3 {+8,32/12 6}
{s}
145. Rc3+ {-6,68/20 3}
{s}
145... Kg4 {+8,71/12 6}
{s}
146. Rc2
{-6.68/22 0.34s, Black wins by adjudication} 0-1[/pgn]

Versus Fruit 2.1. Nice attacking game with black:
[pgn][Event "?"]
[Site "?"]
[Date "2020.11.10"]
[Round "8"]
[White "Fruit 2.1"]
[Black "Mayhem NNUE 0.50"]
[Result "0-1"]
[TimeControl "60/40"]
[ECO "A06"]
[Opening "Reti Opening"]
[PlyCount "115"]
[Termination "adjudication"]
[Annotator "1. +0,23 1... -0,45"]

1. Nf3 {+0,23/11 8}
{s}
1... d5 {-0,45/11 1,3}
{s}
2. e3 {+0,23/11 1,2}
{s}
2... c5 {-0,31/11 1,3}
{s}
3. Bb5+ {+0,44/10 1,6}
{s}
3... Bd7 {+0,04/11 1,2}
{s}
4. Bxd7+ {+0,33/11 2,6}
{s}
4... Qxd7 {+0,06/12 1,2}
{s}
5. O-O {+0,45/11 1,1}
{s}
5... Nc6 {+0,33/11 1,2}
{s}
6. c4 {+0,20/10 1,4}
{s}
6... dxc4 {+0,47/11 1,1}
{s}
7. Na3 {+0,12/10 1,3}
{s}
7... g6 {+0,47/10 1,1}
{s}
8. Qa4 {+0,31/10 1,6}
{s}
8... Bg7 {+0,39/10 1,1}
{s}
9. Qxc4 {+0,27/10 1,2}
{s}
9... Rc8 {+0,73/10 1,0}
{s}
10. Qxc5 {+0,46/9 6}
{s}
10... Nd4 {+0,57/10 10}
{s}
11. Qxa7 {+0,63/10 8}
{s}
11... Nxf3+ {+1,63/10 9}
{s}
12. gxf3 {+0,23/10 5}
{s}
12... Nh6 {+1,16/10 9}
{s}
13. Qa5 {+0,23/10 6}
{s}
13... O-O {+1,40/10 9}
{s}
14. Qb4 {+0,05/9 8}
{s}
14... Rfd8 {+2,68/9 9}
{s}
15. Qe4 {+0,09/9 9}
{s}
15... e5 {+2,76/10 8}
{s}
16. Kh1 {+0,07/10 7}
{s}
16... f5 {+3,85/10 8}
{s}
17. Qb4 {+0,05/10 5}
{s}
17... e4 {+3,22/9 8}
{s}
18. fxe4 {+0,65/9 6}
{s}
18... Kh8 {+1,88/8 7}
{s}
19. exf5 {+1,14/8 6}
{s}
19... Qxf5 {+2,20/8 7}
{s}
20. f3 {+1,06/9 1,8}
{s}
20... Qh3 {+5,27/9 7}
{s}
21. Rf2 {+0,11/9 7}
{s}
21... Rf8 {+4,68/9 7}
{s}
22. Kg1 {+0,23/9 5}
{s}
22... Rxf3 {+6,77/9 6}
{s}
23. Rxf3 {-1,60/11 8}
{s}
23... Qxf3 {+6,64/9 6}
{s}
24. Qh4 {-2,10/8 2,8}
{s}
24... Qd1+ {+10,95/9 6}
{s}
25. Kg2
{-M14/8 0.42s}
25... Rxc1 {+13,48/10 6}
{s}
26. Qd8+ {-2,42/9 1,8}
{s}
26... Ng8 {+11,66/11 6}
{s}
27. Rxc1 {-2,09/10 2,5}
{s}
27... Qxc1 {+12,22/10 5}
{s}
28. Qd5 {-2,37/9 1,5}
{s}
28... Qxb2 {+11,53/10 5}
{s}
29. Nb5 {-2,28/9 2,1}
{s}
29... Nf6 {+11,94/9 5}
{s}
30. Qd8+ {+0,00/12 1,4}
{s}
30... Ng8 {+0,00/13 5}
{s}
31. Qd5 {+0,00/12 1,0}
{s}
31... Bf8 {+10,76/9 5}
{s}
32. Nc3 {-2,05/9 3}
{s}
32... h5 {+9,59/8 5}
{s}
33. Ne4 {-1,68/8 1,9}
{s}
33... Qg7 {+9,43/9 5}
{s}
34. Qe6 {-1,61/8 1,5}
{s}
34... Nh6 {+8,80/10 5}
{s}
35. a4 {-1,44/8 1,5}
{s}
35... Qf7 {+9,52/10 5}
{s}
36. Qxf7 {-1,63/9 1,9}
{s}
36... Nxf7 {+12,44/12 5}
{s}
37. d4 {-1,85/11 1,4}
{s}
37... Kg7 {+13,15/12 5}
{s}
38. h3 {-1,82/11 2,4}
{s}
38... Kh6 {+14,10/11 5}
{s}
39. Nc3 {-1,76/11 2,7}
{s}
39... Bb4 {+14,85/11 5}
{s}
40. Nd5 {-1,92/12 3}
{s}
40... Ba5 {+13,80/12 5}
{s}
41. Kf2 {-1,82/12 1,9}
{s}
41... Nd6 {+15,96/12 5}
{s}
42. Kf3 {-1,91/13 2,8}
{s}
42... Nc4 {+13,55/13 5}
{s}
43. e4 {-1,53/12 2,4}
{s}
43... Nb6 {+15,29/12 5}
{s}
44. Nf4 {-1,84/12 2,4}
{s}
44... g5 {+12,83/12 5}
{s}
45. Nd3 {-1,87/12 2,3}
{s}
45... Nxa4 {+15,72/11 5}
{s}
46. d5 {-2,17/11 1,5}
{s}
46... Bd8 {+17,11/11 5}
{s}
47. d6 {-2,16/11 2,5}
{s}
47... Kg6 {+16,69/11 5}
{s}
48. Kg3 {-2,36/11 2,5}
{s}
48... Kf6 {+17,01/11 5}
{s}
49. h4 {-3,13/11 2,9}
{s}
49... Ke6 {+19,04/12 5}
{s}
50. e5 {-3,59/12 2,8}
{s}
50... gxh4+ {+19,86/12 5}
{s}
51. Kf4 {-4,26/13 1,9}
{s}
51... Kd7 {+20,39/12 5}
{s}
52. Kf3 {-3,58/12 2,7}
{s}
52... b6 {+21,56/12 5}
{s}
53. Nf4 {-4,17/12 3}
{s}
53... Nc5 {+23,43/12 5}
{s}
54. Kg2 {-4,66/13 3}
{s}
54... b5 {+22,99/11 5}
{s}
55. Kf2 {-6,26/12 4}
{s}
55... b4 {+25,71/11 5}
{s}
56. Ke2 {-7,56/13 5}
{s}
56... b3 {+26,28/10 5}
{s}
57. Kd1 {-7,59/11 1,4}
{s}
57... Bg5 {+28,71/12 5}
{s}
58. Ne2
{-7.98/11 0.11s, Black wins by adjudication} 0-1

[/pgn]
Sylwy
Posts: 4467
Joined: Fri Apr 21, 2006 4:19 pm
Location: IASI - the historical capital of MOLDOVA
Full name: SilvianR

Re: Mayhem NNUE - New NN engine

Post by Sylwy »

JohnWoe wrote: Tue Nov 10, 2020 9:38 pm --bench should return at least 1,000,000 NPS. Then your cpu is strong enough.
Good enough? :lol: For a test, of course.
