What I do:
1.) Extract pawn-king eval for 0.7M positions at depth 6 (cp eval for MG and EG).
2.) Before training, evaluate all positions with qsearch, the same as above.
3.) During step 2, fill an input array with the pawn and king positions (0 for an empty square, 1 for a piece).
4.) During step 2, fill an output (target) array with the activated eval difference:
Code:
var output = [
    // The sign/abs combination reduces to a plain subtraction: evalMG - currEval.MG
    Activations.chess((currEval.MG > evalMG ? -1 : 1) * Math.abs(evalMG - currEval.MG)),
    // Likewise: evalEG - currEval.EG
    Activations.chess((currEval.EG > evalEG ? -1 : 1) * Math.abs(evalEG - currEval.EG))
];
5.) The activation function (Activations.chess):
Code:
1 / (1 + Math.pow(10, (-x / 400)))
6.) After training I get the output in cp, because I don't use an activation on the output layer (the hidden layers do use one).
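Steps 3–5 above could be sketched like this. Everything except the activation formula is an assumption: the four-plane input layout, the function names, and the inverse used to map an activated value back to centipawns.

```javascript
// Step 3 (sketch): pawn/king occupancy as a 4 x 64 binary input vector.
// Plane order (assumed): white pawns, black pawns, white king, black king.
function encodeInput(whitePawns, blackPawns, whiteKing, blackKing) {
  const input = new Array(4 * 64).fill(0);
  for (const sq of whitePawns) input[sq] = 1;
  for (const sq of blackPawns) input[64 + sq] = 1;
  input[128 + whiteKing] = 1;
  input[192 + blackKing] = 1;
  return input;
}

// Step 5: the activation from the post, mapping centipawns into (0, 1).
function chess(x) {
  return 1 / (1 + Math.pow(10, -x / 400));
}

// Inverse (assumed), to turn an activated target back into centipawns.
function chessInverse(y) {
  return -400 * Math.log10(1 / y - 1);
}

// Step 4: target = activated eval difference; the sign/abs expression
// in the post is equivalent to a plain subtraction.
function makeTarget(evalMG, evalEG, currEval) {
  return [chess(evalMG - currEval.MG), chess(evalEG - currEval.EG)];
}
```

A zero eval difference activates to exactly 0.5, and `chessInverse` recovers the cp value, so targets and outputs can be compared on either scale.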
The training is currently running with batch size 1024 and the Adam optimizer.
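For reference, a single Adam update for one scalar weight can be sketched like this (standard Adam with the usual default hyperparameters, not the poster's code):

```javascript
// One Adam step: keeps running first/second moment estimates per weight,
// bias-corrects them, and scales the step by the inverse RMS of the gradient.
function adamStep(state, grad) {
  const beta1 = 0.9, beta2 = 0.999, eps = 1e-8, lr = 0.001;
  state.t += 1;
  state.m = beta1 * state.m + (1 - beta1) * grad;        // first moment
  state.v = beta2 * state.v + (1 - beta2) * grad * grad; // second moment
  const mHat = state.m / (1 - Math.pow(beta1, state.t)); // bias correction
  const vHat = state.v / (1 - Math.pow(beta2, state.t));
  state.w -= lr * mHat / (Math.sqrt(vHat) + eps);
  return state;
}

let state = { w: 0.5, m: 0, v: 0, t: 0 };
adamStep(state, 0.2); // one update with gradient 0.2
```

With batched training, `grad` would be the gradient averaged over the 1024 positions in the batch.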
The network is fully connected, works with matrices, and is probably stable; I tested it on a few logic gates.
I don't know the results yet.
My question would be: has anyone tried anything like this before? (What results can I expect?)
What should be changed?
Best practices?
(I'm new to NNs and I don't want to use a machine-learning platform.)
Thanks,
Tamás