Somebody under the user name dkappe on this forum, from Rio de Janeiro, decided to experiment with Komodo and NNUE, and it is NOT bad at all; very promising with the NNUE evaluation. It just needs at least 1000 more games to train it properly.
[quote=dkappe post_id=851931 time=1594843706 user_id=10713]
Just for fun I decided to train a NET with Komodo 14 evals at depth 8. Only 4 million positions. I was expecting something pretty weak, but it’s not half bad. Here with 30 threads vs sf10 (also with 30 threads). So far +3=9-0.
[pgn][Event "lizard1-sf10-11-254, Blitz 1.0min+1.0se"]
[Site "Rio de Janeiro, Brazil"]
[Date "2020.07.15"]
[Round "5.1"]
[White "Lizard-NNUE "]
[Black "Stockfish 10 64 POPCNT"]
[Result "1-0"]
[ECO "A38"]
[Annotator "0.10;-0.08"]
[PlyCount "115"]
[EventDate "2020.07.15"]
[EventType "tourn"]
[SourceTitle "Fritz Engine Tournament"]
[Source "Silver"]
{AMD Ryzen Threadripper 1950X 16-Core Processor 3394 MHz W=23.9 plies; 15,
037kN/s; 358,973 TBAs B=23.3 plies; 32,783kN/s; 1,070,588 TBAs} 1. Nf3 Nf6 2.
g3 g6 3. Bg2 Bg7 4. O-O O-O 5. d3 d6 6. c4 c5 {-0.08/22 2} 7. Nc3 {0.10/23 4}
Nc6 {-0.09/21 1} 8. a3 {0.12/25 4 (Rb1)} Qd7 {-0.08/21 2 (a6)} 9. Rb1 {0.22/20
2} b6 {-0.21/22 1} 10. b4 {0.20/23 3} Bb7 {-0.05/21 2} 11. Qa4 {0.19/24 2 (e3)}
e6 {0.02/20 4} 12. e3 {0.14/23 2 (Rd1)} Ng4 {0.02/20 4 (h6)} 13. Bb2 {0.23/21 2
} Qe7 {0.00/22 1} 14. Ne1 {0.13/23 3 (Nd2)} Rfc8 {-0.18/21 2 (Nce5)} 15. Qb3 {
0.22/22 3} Rab8 {-0.31/21 2 (f5)} 16. b5 {0.16/21 3} Na5 {-0.21/22 1} 17. Qc2 {
0.16/24 4} f5 {-0.02/22 4} 18. Rd1 {0.05/20 2 (Bxb7)} Rf8 {-0.29/20 3 (Re8)}
19. Bxb7 {0.00/25 3 (h3)} Qxb7 {-0.21/24 1} 20. h3 {0.09/23 1} Nf6 {-0.39/23 1}
21. Nb1 {0.00/23 1} Nh5 {-0.44/22 1 (Qd7)} 22. Bxg7 {0.00/27 2} Qxg7 {-0.26/23
3} 23. Nf3 {0.00/27 7 (Nc3)} g5 {-0.17/24 10} 24. Kg2 {0.00/25 4 (Rde1)} h6 {
-0.16/22 4 (Rbe8)} 25. Nc3 {0.00/24 3 (Rde1)} Rbd8 {-0.22/24 2 (Rf7)} 26. a4 {
0.00/22 1 (Rde1)} Nf6 {-0.17/25 4} 27. Rde1 {0.00/28 1} Qf7 {-0.20/23 1} 28.
Qe2 {0.00/25 2} Nd7 {-0.21/24 1} 29. Qc2 {0.00/27 1} Kh7 {-0.25/24 2 (Nf6)} 30.
Nd2 {0.10/22 3 (Ne2)} Rde8 {0.00/24 8 (Ne5)} 31. f4 {0.55/20 1} Qg6 {0.33/25 9}
32. e4 {0.74/23 2} gxf4 {0.51/26 6 (fxe4)} 33. Rxf4 {0.69/28 3} Rg8 {0.51/26 2
(fxe4)} 34. Re3 {0.63/24 4 (Ne2)} Ne5 {0.33/20 1} 35. Ne2 {0.89/23 1} Qh5 {
0.44/22 2} 36. exf5 {0.78/24 4} exf5 {0.31/23 1} 37. Rf2 {0.76/23 1} Qg6 {
0.62/26 6 (Qg5)} 38. Nf4 {1.34/25 4 (Nf1)} Qg5 {0.73/20 1} 39. Kf1 {1.43/23 1}
Nb7 {0.67/22 1} 40. Qc3 {1.28/28 8 (Nd5)} Rgf8 {0.79/24 4} 41. Nd5 {1.23/25 2}
Qh5 {0.88/23 1} 42. Kg2 {1.20/25 1} Qg6 {0.86/24 1} 43. Nf4 {1.33/23 1 (Nb3)}
Qf6 {1.03/25 1 (Qg7)} 44. Nf3 {1.27/26 4} Nd8 {0.97/25 0} 45. Qe1 {1.29/23 0}
Qg7 {1.14/23 1} 46. Kh2 {1.47/23 2} Ne6 {1.08/25 1 (Nxf3+)} 47. Nxe5 {1.53/21 0
} Nxf4 {1.03/25 0} 48. gxf4 {1.63/19 0} Rg8 {1.03/27 1 (dxe5)} 49. Rfe2 {
1.64/21 1} dxe5 {0.63/24 1} 50. Qf2 {1.29/23 1} e4 {0.67/26 1} 51. dxe4 {
1.86/23 1} Qf7 {0.61/25 0 (Rxe4)} 52. exf5 {1.78/23 1} Rxe3 {1.07/26 2} 53.
Rxe3 {2.61/24 1 (Qxe3)} Qxc4 {0.26/20 1 (Qxf5)} 54. f6 {3.63/22 1 (a5)} Qxa4 {
3.76/24 3 (Rf8)} 55. Re7+ {10.20/27 1} Kh8 {12.84/26 2} 56. f7 {11.83/25 1} Rf8
{16.69/24 1} 57. Qb2+ {12.30/27 1} Kh7 {21.79/24 1 (Qd4)} 58. Qb1+ {12.60/24 1}
1-0
[/pgn]
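The engine annotations in the game above follow a compact `{eval/depth time (alternative)}` comment pattern. A minimal sketch of a parser for that format (the field names are my own labels, not part of any PGN standard, and this covers only the first game's comment style) in Python:

```python
import re

# Matches comments like "0.12/25 4 (Rb1)" or "-0.08/21 2":
# eval in pawns, search depth, time in seconds, optional alternative move.
COMMENT_RE = re.compile(
    r"^\s*(?P<eval>[+-]?\d+\.\d+)/(?P<depth>\d+)"
    r"\s+(?P<time>\d+)(?:\s+\((?P<alt>[^)]+)\))?\s*$"
)

def parse_eval_comment(comment: str):
    """Return the eval fields of one PGN comment, or None if it is not one."""
    m = COMMENT_RE.match(comment)
    if m is None:
        return None
    return {
        "eval": float(m.group("eval")),
        "depth": int(m.group("depth")),
        "time": int(m.group("time")),
        "alt": m.group("alt"),  # move the engine had expected, if shown
    }

print(parse_eval_comment("0.12/25 4 (Rb1)"))
print(parse_eval_comment("-0.08/21 2"))
```

Comments that are not evals (e.g. the hardware note at the start of the game) simply return `None`.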
Lizard-NNUE Experiment NOT bad with NNUE Net Evaluation.........
Moderators: hgm, Rebel, chrisw
Posts: 5603 · Joined: Wed Sep 05, 2018 2:16 am · Location: Moving · Full name: Jorge Picado
Last edited by Chessqueen on Thu Jul 16, 2020 12:55 am, edited 2 times in total.
Who is the 17-year-old GM Gukesh, 2nd at the Candidates in Toronto?
https://indianexpress.com/article/sport ... t-9281394/
Posts: 1632 · Joined: Tue Aug 21, 2018 7:52 pm · Full name: Dietrich Kappe
Re: Lizard-NNUE Experiment NOT bad with NNUE Evaluation.........
A number of people verified for me that I wasn’t imagining things. This was run by a friend of mine. I only wish I lived in Rio.

Chessqueen wrote: ↑Thu Jul 16, 2020 12:35 am Somebody under this user name dkappe on this Forum from Rio de Janeiro decided to experiment with Komodo and NNUE and is NOT bad at all very promising with the NNUE evaluation, it just need at least 1000 more games to train it properly
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
Posts: 5603 · Joined: Wed Sep 05, 2018 2:16 am · Location: Moving · Full name: Jorge Picado
Re: Lizard-NNUE Experiment NOT bad with NNUE Evaluation.........
Ask your friend to join this forum and provide more data about his experiment with Komodo-NNUE. He can write in Portuguese, that is fine; we can translate it using https://translate.google.com/

dkappe wrote: ↑Thu Jul 16, 2020 12:47 am A number of people verified for me that I wasn’t imagining things. This was run by a friend of mine. I only wish I lived in Rio.
Posts: 1632 · Joined: Tue Aug 21, 2018 7:52 pm · Full name: Dietrich Kappe
Re: Lizard-NNUE Experiment NOT bad with NNUE Evaluation.........
You misunderstand. I trained the net. He ran it on his machine to make sure I wasn’t imagining things.

Chessqueen wrote: ↑Thu Jul 16, 2020 1:03 am Ask your friend to Join this forum and provide more data about his experiment with Komodo-NNUE, he can write in Portuguese and that is fine we can translate it using https://translate.google.com/
Here is another game, this time running on one of my home machines on only 1 thread.
[pgn]
[Event "Match"]
[Site "Chicago"]
[Date "2020.07.15"]
[Round "12"]
[White "LizardFish1"]
[Black "sfdev150720"]
[Result "1-0"]
[ECO "B28"]
[GameDuration "00:03:44"]
[GameEndTime "2020-07-15T17:27:56.345 CDT"]
[GameStartTime "2020-07-15T17:24:12.179 CDT"]
[Opening "Sicilian"]
[PlyCount "131"]
[TimeControl "60+1"]
[Variation "O'Kelly Variation"]
1. e4 {book} c5 {book} 2. Nf3 {book} a6 {book} 3. c3 {book} e6 {book}
4. d4 {+0.44/19 2.0s} d5 {-0.29/21 4.3s} 5. e5 {+0.55/20 2.1s}
Nc6 {+0.05/18 0.76s} 6. Bd3 {+0.46/19 2.4s} Nge7 {0.00/20 3.2s}
7. O-O {+0.55/17 1.2s} Nf5 {-0.01/20 0.91s} 8. dxc5 {+0.47/20 3.5s}
Bxc5 {-0.43/21 4.8s} 9. Re1 {+0.38/18 1.6s} O-O {-0.33/19 4.1s}
10. Nbd2 {+0.33/17 1.5s} h6 {-0.47/24 6.5s} 11. b4 {+0.55/21 5.8s}
Ba7 {-0.13/17 0.75s} 12. Bb2 {+0.52/18 1.1s} Bd7 {-0.29/22 5.5s}
13. a3 {+0.17/25 17s} b5 {-0.01/24 4.0s} 14. Bb1 {+0.34/19 3.2s}
Nh4 {0.00/19 1.8s} 15. Qc2 {+0.47/18 1.0s} g6 {-0.01/22 4.4s}
16. a4 {+0.47/22 5.1s} bxa4 {-0.17/24 7.3s} 17. Nxh4 {+0.35/22 0.94s}
Qxh4 {-0.66/24 1.8s} 18. Nf3 {+0.78/20 0.99s} Qh5 {-0.32/22 1.1s}
19. Rxa4 {+0.95/20 1.2s} Ne7 {-0.57/24 2.6s} 20. Ra3 {+0.70/21 1.8s}
Bb5 {-0.51/22 1.7s} 21. Bc1 {+0.54/24 9.7s} Bb6 {-0.44/24 5.2s}
22. Bf4 {+0.79/17 0.79s} Rfc8 {-0.37/22 3.0s} 23. Qd2 {+0.56/21 1.9s}
g5 {-0.31/20 1.1s} 24. Be3 {+0.63/22 1.1s} Bxe3 {-0.51/23 1.4s}
25. Rxe3 {+0.50/22 1.9s} g4 {-0.28/21 1.3s} 26. Nd4 {+0.37/22 1.4s}
Qg5 {-0.31/24 2.9s} 27. h4 {+0.55/21 0.96s} Qxh4 {0.00/22 0.74s}
28. Rg3 {+1.50/21 1.2s} Qg5 {0.00/27 1.5s} 29. f4 {+0.40/23 3.2s}
Qh4 {-0.25/21 1.4s} 30. Qf2 {+0.52/23 1.3s} Ng6 {-0.67/26 7.5s}
31. Bxg6 {+1.36/22 4.3s} fxg6 {-1.26/23 1.2s} 32. Nxe6 {+1.37/19 0.58s}
h5 {-1.22/22 2.0s} 33. Ra1 {+1.34/20 2.3s} Rc4 {-1.24/19 1.0s}
34. Nd4 {+1.35/18 0.55s} Rf8 {-1.25/19 0.65s} 35. e6 {+1.68/17 0.77s}
Rc7 {-0.38/20 2.8s} 36. Re1 {+1.90/21 1.7s} Re7 {-1.63/19 0.62s}
37. f5 {+1.98/17 0.52s} Bc4 {-1.96/20 0.74s} 38. Qf4 {+2.21/20 2.8s}
Bd3 {-0.64/20 1.4s} 39. Re5 {+2.74/18 0.47s} Be4 {-1.03/18 0.24s}
40. c4 {+4.09/18 0.63s} Rb7 {-3.66/21 2.6s} 41. cxd5 {+4.17/17 0.58s}
Bxd5 {-3.59/18 0.16s} 42. Rxd5 {+3.81/17 0.98s} Rxb4 {-2.37/16 0.26s}
43. Kf2 {+4.37/16 1.1s} Rb2+ {-5.27/17 3.1s} 44. Ne2 {+5.12/16 0.78s}
Qe7 {-4.59/17 0.48s} 45. Qh6 {+5.04/18 1.0s} Qg7 {-4.67/18 0.38s}
46. Qxg7+ {+5.76/18 0.77s} Kxg7 {-4.67/9 0s} 47. Re3 {+5.88/19 0.69s}
gxf5 {-5.84/19 1.9s} 48. e7 {+6.13/17 0.71s} Re8 {-3.97/15 0.27s}
49. Kg3 {+6.45/20 1.2s} Rb5 {-4.13/17 0.55s} 50. Rd7 {+6.66/17 0.75s}
Rc5 {-6.41/21 3.3s} 51. Nd4 {+8.06/20 0.97s} Rb5 {-8.74/18 1.5s}
52. Kf4 {+10.34/19 0.79s} Rb4 {-8.67/16 0.34s} 53. Kg5 {+11.86/20 0.75s}
Kf7 {-16.94/18 1.6s} 54. Nxf5 {+12.45/17 1.3s} Rb5 {-9.63/17 0.61s}
55. Rd6 {+12.86/16 0.72s} Rg8+ {-6.56/16 0.21s} 56. Kf4 {+12.73/16 1.7s}
Re8 {-23.46/17 1.3s} 57. Rd8 {+13.24/15 0.76s} Kf6 {-6.73/16 1.2s}
58. Rxe8 {+M17/26 0.76s} Rxf5+ {-M16/25 0.41s} 59. Kg3 {+M15/36 0.72s}
h4+ {-M14/28 0.22s} 60. Kxh4 {+M13/35 0.71s} Re5 {-M12/41 0.28s}
61. Rxe5 {+M11/51 0.74s} Kxe5 {-M10/59 0.34s} 62. Rd8 {+M9/80 0.77s}
Kf6 {-M8/201 0.39s} 63. e8=Q {+M7/245 0.69s} g3 {-M6/245 0.16s}
64. Rd6+ {+M5/245 0.13s} Kg7 {-M4/245 0.008s} 65. Qe7+ {+M3/245 0.012s}
Kh8 {-M2/245 0.003s} 66. Rd8# {+M1/245 0.008s, White mates} 1-0
[/pgn]
It’s running about even with the latest sfdev, but the score, nps and pv are very different, so it’s not accidental sf eval.
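One rough way to back up that "it's not accidental sf eval" claim is to compare the two engines' analyses of the same positions: if the net really encodes a different evaluation, scores and principal variations should diverge well beyond noise. A small sketch with toy, hypothetical records (a real check would collect `(score, pv)` pairs from actual UCI output):

```python
def divergence(records_a, records_b):
    """Compare per-position (score_cp, first_pv_move) records from two engines.

    Returns the mean absolute score gap in centipawns and the fraction of
    positions where the first PV move agrees.
    """
    assert len(records_a) == len(records_b)
    gaps = [abs(sa - sb) for (sa, _), (sb, _) in zip(records_a, records_b)]
    same_pv = sum(ma == mb for (_, ma), (_, mb) in zip(records_a, records_b))
    return sum(gaps) / len(gaps), same_pv / len(records_a)

# Toy example: made-up analyses of four positions by two engines.
lizard = [(10, "Nf3"), (22, "Rb1"), (0, "Bxg7"), (55, "f4")]
sf = [(-8, "Nf3"), (40, "a4"), (-26, "Bxg7"), (33, "e4")]
mean_gap, pv_agreement = divergence(lizard, sf)
print(mean_gap, pv_agreement)
```

A net that had accidentally reproduced Stockfish's eval would show near-zero gaps and near-total PV agreement; large gaps plus frequent PV disagreement is the signature of a genuinely different evaluation.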
Posts: 5603 · Joined: Wed Sep 05, 2018 2:16 am · Location: Moving · Full name: Jorge Picado
Re: Lizard-NNUE Experiment NOT bad with NNUE Evaluation.........
I thought that you used Komodo 14 with the NNUE net. If that is the case, and you did NOT use the Stockfish evaluation, WHY do you call it LizardFish and NOT Lizard-NNUE?

dkappe wrote: ↑Thu Jul 16, 2020 1:15 am You misunderstand. I trained the net. He ran it on his machine to make sure I wasn’t imagining things.
Posts: 1632 · Joined: Tue Aug 21, 2018 7:52 pm · Full name: Dietrich Kappe
Re: Lizard-NNUE Experiment NOT bad with NNUE Net Evaluation.........
I generated training data with Komodo 14 (and a modest amount of python). I used a recent nnue binary to train a net using that data and am running that net using that binary (as are my helpful friends). So this is an approximation of the Komodo 14 eval at depth 8 running on a sf-nnue binary. The name? Nothing serious. It’s a marriage of Komodo and stockfish — LizardFish.
I’ll train it up some more, but I have mixed feelings about distributing a stronger version. It seems almost like a theft of Komodo’s intellectual property. The same sort of cloning (and I think this is much more “cloning” than the usual name calling on this forum) could be done with any uci engine and a modest amount of cpu.
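The pipeline described here (score a large set of positions with a UCI engine at a fixed depth, then train a net on those labels) can be sketched with a bare-bones UCI driver over stdin/stdout. This is an assumption-laden sketch, not dkappe's actual script: the engine path and FEN list are placeholders, a real run would need error handling and batching, and only the `score cp` parsing is demonstrated working below.

```python
import re
import subprocess

CP_RE = re.compile(r"\bscore cp (-?\d+)\b")

def parse_cp(info_line: str):
    """Extract a centipawn score from a UCI 'info' line, or None."""
    m = CP_RE.search(info_line)
    return int(m.group(1)) if m else None

def label_positions(engine_path, fens, depth=8):
    """Yield (fen, cp) pairs by asking a UCI engine for a fixed-depth eval.

    Sketch only: assumes the engine speaks plain UCI and takes the score
    from the last 'info' line seen before 'bestmove'.
    """
    p = subprocess.Popen([engine_path], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, text=True, bufsize=1)
    def send(cmd):
        p.stdin.write(cmd + "\n")
    send("uci")
    while "uciok" not in p.stdout.readline():
        pass  # drain engine identification output
    for fen in fens:
        send(f"position fen {fen}")
        send(f"go depth {depth}")
        cp = None
        while True:
            line = p.stdout.readline()
            if line.startswith("info"):
                v = parse_cp(line)
                if v is not None:
                    cp = v
            if line.startswith("bestmove"):
                break
        yield fen, cp
    send("quit")

print(parse_cp("info depth 8 seldepth 12 score cp 44 nodes 1000 pv e2e4"))
```

With roughly 4 million FENs labeled this way, the resulting `(fen, cp)` pairs become the training set fed to the NNUE trainer.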
Posts: 3019 · Joined: Wed Mar 08, 2006 9:57 pm · Location: Rio de Janeiro, Brazil
Re: Lizard-NNUE Experiment NOT bad with NNUE Net Evaluation.........
It really is not, since while the NN is trained from games played by Komodo, it is still an NN. If studying Kasparov's games and trying to emulate him makes me his clone then... my dreams have all come true!!

dkappe wrote: ↑Thu Jul 16, 2020 2:10 am I generated training data with Komodo 14 (and a modest amount of python). I used a recent nnue binary to train a net using that data and am running that net using that binary (as are my helpful friends). So this is an approximation of the Komodo 14 eval at depth 8 running on a sf-nnue binary. The name? Nothing serious. It’s a marriage of Komodo and stockfish — LizardFish.
I am the mysterious tester (this is all dkappe's work), and I ran it for 35 games before calling it quits. It was 35 games only (not 1000, sorry), with 30 threads each, at roughly 30+ million nps for SF10 and 14-15 million nps for Lizard. I would have played a later version of SF, but I was told not to be too optimistic, so SF10 was chosen to try to keep the match competitive. A case of a net underestimating itself if ever there was one.
Here is the result:
Code:
lizard1-sf10-11-254, Blitz 1.0min+1.0se 2020
12345678901234567890123456789012345
1 SF NNUE halfkp-256 090720,x64 avx2 +113 ½½½½1½½½1½½1½½1½½½1½½½1111½01½1½½½1 23.0/35
2 Stockfish 10 64 POPCNT -113 ½½½½0½½½0½½0½½0½½½0½½½0000½10½0½½½0 12.0/35
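The +113/-113 figures in the crosstable follow from the 23/35 score via the standard logistic Elo model. A quick check (ignoring draw ratio and error bars, which a serious estimate would include):

```python
import math

def elo_diff(points: float, games: int) -> float:
    """Elo difference implied by a match score, standard logistic model."""
    p = points / games  # scoring rate, must be strictly between 0 and 1
    return -400 * math.log10(1 / p - 1)

print(round(elo_diff(23.0, 35)))  # the 23/35 result above → 113
```

So the crosstable's +113 is exactly the Elo gap implied by scoring 23 points out of 35.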
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Posts: 1494 · Joined: Thu Mar 30, 2006 2:08 pm
Re: Lizard-NNUE Experiment NOT bad with NNUE Net Evaluation.........
I consider Albert a good friend, but I must disagree a bit. Training a NN to match the eval and search output of a single program seems to me to be a way to clone that program. We might not understand exactly how the NN works compared with, say, an assembly dump of a program's eval and search functions, but it is a direct attempt to duplicate the program. Training on many sources (programs, human games, self-play) is not trying to specifically duplicate another program's search and eval, so I think that would be allowed. Training for personal use is fine. I am just saying that training against a program (especially a commercial engine) and then releasing the NN without permission is wrong. I assume testing groups and tournaments would agree, but I would like to hear more opinions.

Albert Silver wrote: ↑Thu Jul 16, 2020 2:54 am It really is not, since while the NN is trained from games played by Komodo, it is still an NN. If studying Kasparov's games and trying to emulate him makes me his clone then... my dreams have all come true!!
This is a new world, but the old cloning rules would still apply.
Mark
Posts: 5603 · Joined: Wed Sep 05, 2018 2:16 am · Location: Moving · Full name: Jorge Picado
Re: Lizard-NNUE Experiment NOT bad with NNUE Net Evaluation.........
Well, as long as Larry Kaufman agrees to take advantage of the NNUE net, and the advantages of using an NNUE net benefit Komodo to the point that it becomes stronger, that would NOT be any different from using StockfiNN to advance Stockfish's search with a more efficient one.

mjlef wrote: ↑Thu Jul 16, 2020 4:38 am I consider Albert a good friend, but I must disagree a bit. Training a NN to match the eval and search output of a single program seems to me to be a way to clone that program. [...]
Posts: 1494 · Joined: Thu Mar 30, 2006 2:08 pm
Re: Lizard-NNUE Experiment NOT bad with NNUE Net Evaluation.........
I agree. It is fine to use your own program to make a better version of itself, just like tuning a program.

Chessqueen wrote: ↑Thu Jul 16, 2020 4:53 am Well, as long as Larry Kaufman agrees to take advantage of the NNUE net, and the advantages of using an NNUE net benefit Komodo to the point that it becomes stronger, that would NOT be any different from using StockfiNN to advance Stockfish's search with a more efficient one.