Uri Blass wrote: ↑Mon Oct 25, 2021 3:48 pm
amanjpro wrote: ↑Mon Oct 25, 2021 12:54 pm
Rebel wrote: ↑Mon Oct 25, 2021 12:27 pm
Gabor Szots wrote: ↑Mon Oct 25, 2021 10:20 am
amanjpro wrote: ↑Mon Oct 25, 2021 4:24 amI believe most of the major rating list testers (CEGT, CCRL and others) are not interested in SF NNUE unless the net is trained solely on the engine's own games
That's true.
I think the issue is important enough to open a discussion.
NNUE eval is the result of:
1. Training software (freely available)
2. Quality of the EPD's (hard own work)
3. NNUE implementation (freely available)
Since the elo is in the EPD's (and is the gold-digging part) I see no good reason to put a limitation on the creative part (the quality of the EPD's). Further, it discriminates against newcomers, who are forced to write a good HCE eval first. Everybody should be free to create his own EPD database as he pleases, train it as he pleases, implement it as he pleases.
I see only one restriction: using an existing NNUE from someone else, since that means the creative (and hard) part is skipped. Fire comes to mind; I don't test it.
I know an engine that started with PSQT, then trained a net on that, and is already at 2600 probably... So the eval part doesn't need to be that good.
After all, NNUE is not amazing because of an amazing eval, but because it's the result of a somewhat deep search and an eval.
Btw, most of the OpenBench engines have a completely different architecture, as well as different probing code, from what's found in SF.
Zahak's is nothing more than a glorified PSQT.
1) I read of somebody who got more than 3000 Elo with only PSQT and search, so 2600 with PSQT and a net is not impressive.
2600 is low in today's computer chess.
Well, the point still applies: you create a very basic eval, generate data, train a net, then use the net to generate more data and train a new net, and so on. With 8 cores you can generate at least 40M FENs a day. Zahak's first network was trained on 57M FENs (roughly one day's worth of data),
and it was still able to beat my somewhat elaborate eval.
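The bootstrapping loop described above can be sketched as follows. This is a hypothetical outline, not Zahak's actual code: the names `generate_positions` and `train_network` are placeholders for real self-play data generation and NNUE training.

```python
# Sketch of iterative net bootstrapping: start from a basic
# hand-crafted eval, generate labelled positions by self-play,
# train a net on them, then repeat with the new net as the eval.

def generate_positions(evaluator, n):
    """Stand-in for self-play data generation: a real engine would
    play games with the current evaluator and label each position
    with a search score or the game result."""
    return [(f"fen-{i}", evaluator(i)) for i in range(n)]

def train_network(positions):
    """Stand-in for NNUE training: returns a new evaluator fitted
    to the labelled positions (here, just their average score)."""
    avg = sum(score for _, score in positions) / len(positions)
    return lambda pos: avg  # dummy "net"

# Generation 0: a very basic eval (think PSQT only).
evaluator = lambda pos: pos % 100

# Each generation uses the previous net to label the next batch.
for generation in range(3):
    data = generate_positions(evaluator, 1000)
    evaluator = train_network(data)
```

The key point is that data quality compounds: each generation's net labels the next generation's training positions, so even a weak starting eval can bootstrap its way up.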
Uri Blass wrote: ↑Mon Oct 25, 2021 3:48 pm
2) I do not see why programmers care about more Elo instead of caring about some knowledge that top programs do not have.
Again, if rating lists are here, if tournaments are here, then the Elo craze stays... We make it competition-like, and then when somebody cares about it he is blamed for it? I really don't get the logic...
I care about other things apart from making the engine stronger (MORE ELO), but so do most programmers; that is why you see MultiPV, even though it adds no strength, or extra features like "own book", "searchmoves", "go mate", etc...
Uri Blass wrote: ↑Mon Oct 25, 2021 3:48 pm
For example, top programs do not know how to evaluate correctly the following drawn position, and to do it you only need an evaluation function that has the engine play against itself at depth 5 and returns the result of that game as the static evaluation.
The evaluation may be expensive because of the time the engine needs to play against itself at depth 5, and the engine may be weaker, but who cares about that when the engine can see things other engines do not see, like the fact that the position is a draw, even some plies earlier?
5 is of course an arbitrary number, and people may raise it to make the static evaluation more expensive and more accurate if they like.
[fen]2k5/1pP5/pP6/P7/8/8/P6B/4K3 w - - 0 1 [/fen]
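The idea above (self-play as a static evaluation) could be sketched like this. The `ToyPosition` class is a deliberately trivial stand-in for a real engine's position and search; in practice you would wrap an actual engine behind the same interface.

```python
# Sketch: use the result of a fixed-depth self-play game as the
# "static" evaluation of the root position.  ToyPosition is a toy
# stand-in whose "game" just counts down to a draw.

class ToyPosition:
    def __init__(self, plies_left):
        self.plies_left = plies_left

    def is_game_over(self):
        return self.plies_left == 0

    def search(self, depth):
        # A real engine would run a depth-limited search here.
        return "best-move"

    def make_move(self, move):
        return ToyPosition(self.plies_left - 1)

    def result(self):
        # Game outcome from the root side's view: 0 = draw.
        return 0

def self_play_eval(position, depth=5):
    """Play the position out, both sides using the engine at the
    given depth, and return the game result as the evaluation."""
    while not position.is_game_over():
        move = position.search(depth)
        position = position.make_move(move)
    return position.result()
```

For the fortress position quoted above, such an evaluation would return 0 (draw) wherever the shallow self-play game ends in the fortress, which is exactly the knowledge Uri says a conventional static eval lacks.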
Sorry, I don't understand the message here.