DustyMonkey wrote: ↑Mon Jan 27, 2020 5:14 pm
an autoencoder cannot reduce the state much beyond a well packed traditional encoding
Actually, on its own it cannot at all.
Board adaptive / tuning evaluation function - no NN/AI
-
- Posts: 17
- Joined: Sat Jan 11, 2020 3:52 pm
- Full name: Moritz Gedig
Re: Representation in metric Space
Re: More details
Re-reading my own post, I found it too hard to understand to leave unclarified.
By that I meant the board position from which the eval() was propagated.
I was talking about root moves, which now have both a propagated (search) value and a direct static evaluation. After each search we get pairs of states with approximately the same propagated value, even though the [immediate] eval() yields different values.
For every move with these two evaluations we know whether our eval() judges the position too low or too high, and we can try to find out why. We know that our eval() is wrong in a particular direction.
The resulting positions differ only by the move played, so only a few (two) squares make the difference. Because none of the root-move result states can differ much from the others, it should not be hard to figure out what made the difference.
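The idea above can be sketched in a few lines. This is only an illustration, not a real engine API: `search_root`, `static_eval`, and `make_move` are hypothetical placeholders for whatever your engine provides. The sketch pairs each root move's propagated search value with the static eval() of the position it leads to, and records the signed error, i.e. the direction in which eval() is wrong.

```python
# Hypothetical sketch: compare each root move's propagated (search) value
# with the static eval() of the resulting position. The signed error tells
# us the direction in which eval() misjudges that position.
# search_root, static_eval, make_move are assumed placeholder callables.

def eval_discrepancies(position, search_root, static_eval, make_move):
    """Return (move, search_value, static_value, error) tuples.

    A positive error means static eval() judges the resulting position
    too low relative to the search; a negative error means too high.
    """
    pairs = []
    for move, search_value in search_root(position):
        child = make_move(position, move)
        static_value = static_eval(child)
        pairs.append((move, search_value, static_value,
                      search_value - static_value))
    # Sort by absolute error so the worst-judged positions come first.
    pairs.sort(key=lambda t: abs(t[3]), reverse=True)
    return pairs
```

Since all the resulting positions stem from the same root and differ by only one move, the top of this sorted list points at small, concrete board differences that eval() handles inconsistently.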