Hi Gary,

gladius wrote:

Hi Peter,

petero2 wrote:

The average error E is now a function of the M parameters. Find parameter values such that E is a local minimum in parameter space. The exact optimization method is not that important. It could for example use local search varying one parameter at a time, the Gauss-Newton method, the conjugate gradient method, or a hybrid approach mixing those methods.

If the evaluation function parameters are all of integer type with finite range (which they typically are in a chess engine), the local search method is guaranteed to eventually terminate since there is only a finite number of elements in the parameter space. Also in this case the conventional gradient isn't mathematically defined, but you can use difference quotients instead of the "real" gradient in the GN and CG methods.

This optimization method sounds really nice, and thanks for the really clear explanation! I'd been wanting to try something like it for a long time.

I've got the engine (Stockfish [1]) all hooked up to expose its internal eval features, and built up the fen database with game results from recent games. The part I don't seem to be moving very fast on is the actual optimization :). My math skills are on the lower side, so I was just trying out the scipy optimization toolkit. It has a conjugate-gradient method built in, but it doesn't seem to be doing a great job so far [2].

Can you be more specific about what you meant by using the difference quotient?

Thanks!

Gary

[1] https://github.com/glinscott/Stockfish/ ... r...tuning

[2] https://gist.github.com/glinscott/8814218

What I meant by difference quotient is indeed what Álvaro already explained. To be more specific, instead of trying to compute dE/dp, compute (E(p+1) - E(p-1))/2, where p is one parameter. Do this for all parameters to obtain the gradient.
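A minimal sketch of that central-difference gradient, assuming E is available as a callable that maps an integer parameter vector to the average error (the function name and signature here are mine, not from any engine):

```cpp
#include <functional>
#include <vector>

// Central-difference approximation of the gradient of E over integer
// parameters: component i is (E(p_i + 1) - E(p_i - 1)) / 2.
std::vector<double> numericalGradient(
        const std::function<double(const std::vector<int>&)>& E,
        const std::vector<int>& params) {
    std::vector<double> grad(params.size());
    for (size_t i = 0; i < params.size(); i++) {
        std::vector<int> up = params, down = params;
        up[i] += 1;       // E(p+1)
        down[i] -= 1;     // E(p-1)
        grad[i] = (E(up) - E(down)) / 2.0;
    }
    return grad;
}
```

Since the parameters are integers, a step of 1 is the smallest meaningful step, which is why the quotient uses a spacing of exactly one on each side.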

I would like to stress again that so far I have not used the conjugate gradient or the Gauss-Newton methods, so I don't know how well they will work. If E and its derivatives are smooth enough the methods should work well, but there is no guarantee that E is smooth enough.

What I have been doing is using local search, which basically works like this:


```
vector<int> localOptimize(const vector<int>& initialGuess) {
    const int nParams = initialGuess.size();
    double bestE = E(initialGuess);       // E is the average error described above
    vector<int> bestParValues = initialGuess;
    while (true) {
        bool improved = false;
        for (int pi = 0; pi < nParams; pi++) {
            vector<int> newParValues = bestParValues;
            newParValues[pi] += 1;        // try increasing this parameter
            double newE = E(newParValues);
            if (newE < bestE) {
                bestE = newE;
                bestParValues = newParValues;
                improved = true;
            } else {
                newParValues[pi] -= 2;    // undo and try decreasing instead
                newE = E(newParValues);
                if (newE < bestE) {
                    bestE = newE;
                    bestParValues = newParValues;
                    improved = true;
                }
            }
        }
        if (!improved)                    // no single-step change helped: local minimum
            break;
    }
    return bestParValues;
}
```

A couple of refinements I use on top of this basic loop:

1. If I find that increasing a parameter improves E, I keep increasing that parameter until there is no further improvement, and similarly when decreasing a parameter helps.

2. Instead of just looping over all parameters repeatedly in the same order, I keep statistics about how often and by how much changing a parameter has improved E. Parameters with a good history are tried more often than other parameters. (I don't know how much this really helps but it seemed to be a reasonable approach.)
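Refinement 1 can be sketched like this, assuming the same E and integer parameter vector as the code above (the function name and exact interface are mine):

```cpp
#include <functional>
#include <vector>

// Once a step of `dir` (+1 or -1) on parameter `pi` has improved E, keep
// stepping in that direction until E stops improving. Updates `params` in
// place and returns the best E found.
double lineSearchOneParam(
        const std::function<double(const std::vector<int>&)>& E,
        std::vector<int>& params, int pi, int dir, double bestE) {
    while (true) {
        std::vector<int> candidate = params;
        candidate[pi] += dir;
        double newE = E(candidate);
        if (newE >= bestE)
            break;            // no further improvement in this direction
        bestE = newE;
        params = candidate;
    }
    return bestE;
}
```

This avoids re-scanning all the other parameters between each unit step when one parameter is clearly far from its optimum.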

In Texel I also have information about which parameter values are equal to each other by design. For example in the bishop endgame PST, the a1, h1, a8 and h8 squares have the same value by design, so only one parameter for all these squares is exposed to the optimization code. Reducing the number of parameters means you can get stable results with fewer games.

Finally I want to mention that I used a very fast time control (1+0.08) for the games I collected the FEN positions from. Even though this was partially out of convenience since I already have such games, it is also quite likely that using lower quality games improves the tuning. By using games that contain some tactical mistakes, the evaluation function has a chance to learn how to score unbalanced positions, not just positions that would occur in high quality games. I have heard that this is an important property when tuning Othello evaluation functions, and I guess it is important for chess too.