Using an unbalanced opening book built from 2moves_v1 to decrease the draw rate, I am getting about a 150 Elo advantage for the best SV nets with the latest SF compile over SF_dev at a 15s + 0.25s time control.
Complete destruction of SF_dev: a win/loss ratio of more than 3.
Games Completed = 100 of 100 (Avg game length = 61.968 sec)
Settings = Gauntlet/128MB/15000ms+250ms/M 700cp for 3 moves, D 120 moves/EPD:C:\LittleBlitzer\2moves_80_100.epd(1749)
Time = 1593 sec elapsed, 0 sec remaining
1. SFNNUE 0633 SF binary 70.0/100 57-17-26 (L: m=0 t=0 i=0 a=17) (D: r=13 i=4 f=1 s=0 a=8) (tpm=443.3 d=19.28 nps=1128817)
2. SF_dev 30.0/100 17-57-26 (L: m=0 t=0 i=0 a=57) (D: r=13 i=4 f=1 s=0 a=8) (tpm=447.8 d=20.01 nps=1616564)
This is definitely beyond my expectations.
I never expected this from training on fixed-depth generated games.
Jörg,
Will you be maintaining your Stockfish-NNUE fork, now that NNUE has been merged into Stockfish?
It would be nice to have a "true" Stockfish-NNUE: one that lets the user always use the neural-net evaluation, rather than the hybrid of classical and NNUE that Stockfish has adopted.
Louis
M ANSARI wrote: ↑Fri Aug 07, 2020 7:32 am
This makes sense, but I don't think it is the optimal way to go about this. Of course, in quiet positions 50% fewer nps is not as critical as in the endgame, but I believe that even in the endgame the NN evaluation can be better in many positions.
They could, yes, but overall it's clearly better not to use NNUE in decided positions, or the change wouldn't have passed.
It's the same for people asking for an always-NNUE switch or engine: Stockfish developers generally don't like adding switches that make the engine weaker for no good reason.
No, I don't think so, Louis.
The fork was mainly meant for some cleanup and as a lesson in cloning with git.
Now that it has been merged into official master, I can try whatever I want from my main Stockfish repository.
OK, Jörg. I understand that official Stockfish development is driven by maximizing self-play Elo at Fishtest time controls, which doesn't really interest me. I'd like to have an ongoing version of Stockfish that lets the user abandon the classical evaluation entirely, but I don't think I have the time or the skill set to maintain such a thing.
Leto wrote: ↑Fri Aug 07, 2020 12:19 am
A new patch has been released; it appears that Stockfish is now a hybrid engine: it plays as Stockfish NNUE in quiet, balanced-material positions and as classical Stockfish in all other positions.
Two of the parameters added were tempo and lazy cutoff, both proven Elo gainers for any AB engine, and in fact they added about 15 Elo right off the bat to the NN engine. They were very simple patches.
"The idea is to use NNUE only on quite balanced material positions."
This was based on the fact that SF-NNUE searches much more slowly than classic SF on most hardware. But if one has fast hardware, so that SF-NNUE runs fast enough, then why not always use SF-NNUE?
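The tempo and lazy-cutoff ideas quoted above really are simple patches. A hypothetical sketch of both in one evaluation function, with invented names and margins (not actual Stockfish source):

```cpp
#include <cstdlib>

// Illustrative constants; real engines tune these values on a tester.
constexpr int kTempoCp      = 20;    // assumed bonus for the side to move
constexpr int kLazyMarginCp = 1500;  // assumed lazy-eval cutoff margin

// Cheap material-only estimate vs. expensive positional terms.
// With a lazy cutoff, positions whose cheap estimate is already far
// from zero skip the expensive part of the evaluation entirely.
int evaluate(int materialCp, int expensiveTermsCp) {
    if (std::abs(materialCp) > kLazyMarginCp)
        return materialCp + kTempoCp;              // lazy cutoff taken
    return materialCp + expensiveTermsCp + kTempoCp;
}
```

The Elo comes from speed, not accuracy: most positions inside the search tree are lopsided enough that the cheap estimate is good enough for an alpha-beta bound.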
cma6 wrote: ↑Fri Aug 07, 2020 4:08 pm
"The idea is to use NNUE only on quite balanced material positions."
This was based on the fact that SF-NNUE searches much more slowly than classic SF on most hardware. But if one has fast hardware, so that SF-NNUE runs fast enough, then why not always use SF-NNUE?
Because Stockfish development is designed to maximize self-play Elo at Fishtest time controls. That's fine, but it will occasionally leave some users (like me) unhappy. One can always fork the project and do what one wishes ...
cma6 wrote: ↑Fri Aug 07, 2020 4:08 pm
"The idea is to use NNUE only on quite balanced material positions."
This was based on the fact that SF-NNUE searches much more slowly than classic SF on most hardware. But if one has fast hardware, so that SF-NNUE runs fast enough, then why not always use SF-NNUE?
How is Stockfish defining 'quite balanced material positions'? Thanks.
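No definition is given in this thread, but for illustration only, a gate like the one described might look something like this (the name and threshold are assumptions, not actual Stockfish source):

```cpp
#include <cstdlib>

// Hypothetical "quite balanced material" test: use the NNUE evaluation
// only when the static material balance (e.g. taken from piece-square
// tables, before any search) is within some centipawn threshold of
// equality. The threshold here is invented for illustration.
constexpr int kNNUEThresholdCp = 500;

bool useNNUE(int materialBalanceCp) {
    return std::abs(materialBalanceCp) < kNNUEThresholdCp;
}
```

Roughly: down a pawn or two, still NNUE; up a whole rook, fall back to the classical evaluation, which searches faster in decided positions.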