asanjuan wrote:
you are fitting game outcomes (1, 0 or 0.5) over
f(x) = 1/(1+exp(-k*x))
The pawn value should be the result of the optimisation process, not an arbitrary number. Your goal is to minimize that function, not to obtain 1.00 for the pawn value, which is a "cosmetic" result. So: you need to tune ALL parameter values.
For the record: Rhetoric also uses k=1, and I tune EVERY parameter.
Just try.
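The fitting setup described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual tuner: the data format (a list of feature vectors with game results 1, 0.5 or 0) and the linear `evaluate` function are assumptions made for the example.

```python
import math

def sigmoid(k, x):
    """Map an evaluation score x to an expected game result in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-k * x))

def evaluate(params, features):
    """Toy linear evaluation: dot product of parameters and feature counts.
    A real engine's evaluation would go here."""
    return sum(p * f for p, f in zip(params, features))

def fit_error(params, positions, k=1.0):
    """Mean squared error between predicted and actual game results.

    `positions` is a list of (features, result) pairs, with result
    in {1, 0.5, 0}. Minimizing this over ALL parameters (including
    the pawn value) is the tuning goal described above.
    """
    total = 0.0
    for features, result in positions:
        score = evaluate(params, features)
        total += (result - sigmoid(k, score)) ** 2
    return total / len(positions)
```

Feeding this error function to any general-purpose minimizer (with k held fixed) then tunes every parameter at once.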
Well, my point was that you can fix either the EG value of a Pawn, or k. Fixing both probably removes too many degrees of freedom. So one can either
- Fix k and tune Value[P][EG]
- Fix Value[P][EG] and tune k
Now, k is not actually important to the engine, because the engine is not concerned with mapping evaluation scores to predicted game results; it only matters that a larger evaluation corresponds to better winning chances.
For cosmetic reasons, I would prefer to fix Value[P][EG] in the engine, but when tuning it is actually easier to fix k (at least in the way I implemented it: I just add the pawn value to the list of evaluation parameters to tune). You can of course do both, after a fashion: fix k, tune the evaluation, and then for the purpose of playing games rescale everything so that the end-game value of a pawn is fixed. The only drawback is that you then need to determine k again for the next run.
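That rescaling step can be sketched as follows. Since only the product k*eval matters for the fit, multiplying every evaluation term by a constant c is harmless for play, but the equivalent k becomes k/c, which is why the next tuning run needs a fresh k. The flat parameter list and index argument are assumptions for illustration.

```python
def rescale(params, pawn_eg_index, target_pawn_eg, k):
    """Rescale tuned parameters so the end-game pawn value is fixed.

    Returns the scaled parameter list and the k that would reproduce
    the same predicted results with the scaled evaluation.
    """
    c = target_pawn_eg / params[pawn_eg_index]
    scaled = [p * c for p in params]
    new_k = k / c  # the fit is unchanged only if k absorbs the inverse factor
    return scaled, new_k
```

For example, rescaling the tuned values below with target_pawn_eg=1.0 would pin the end-game pawn at 1.00 while shrinking every other term by the same factor.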
Anyway, allowing the pawn value to vary gives me this:
Code:
      MG    EG
P   0.67  1.07
N   3.27  2.94
B   3.19  3.09
R   4.08  5.39
Q   9.13  9.93
BB  0.10  0.21
so the problem certainly looks smaller, but it is not actually gone. I guess that means that to do this correctly, I do need some passed-pawn terms.