I've got the engine (Stockfish) hooked up to expose its internal eval features, and I've built up the FEN database with game results from recent games. The part I'm not moving very fast on is the actual optimization. My math skills are on the lower side, so I was just trying out the scipy optimization toolkit. It has a conjugate-gradient method built in, but it doesn't seem to be doing a great job so far.
I would try BFGS or L-BFGS (BFGS is available in scipy, and there is a C library called liblbfgs for the latter; either one should be fine for this situation).
Can you be more specific about what you mean by using the difference quotient?
The gradient of a function is just the vector of its partial derivatives with respect to each parameter. You can compute it symbolically, or you can just tweak a parameter a little bit and see how much the function's value changes in response. The partial derivative is the limit of that quotient as the size of the tweak goes to zero. Using just a small tweak is probably what he means by "difference quotient".
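A minimal sketch of that difference-quotient idea, using a central difference (nudge each parameter by +h and -h). The function `numericalGradient` and the step size are my own naming; you'd plug in your actual error function over the FEN database:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Central-difference approximation of the gradient: for each parameter,
// evaluate f at param+h and param-h and divide the change by 2h.
// h is a small fixed step, not a true limit, so this is an approximation.
std::vector<double> numericalGradient(
    const std::function<double(const std::vector<double>&)>& f,
    std::vector<double> params,
    double h = 1e-5)
{
    std::vector<double> grad(params.size());
    for (std::size_t i = 0; i < params.size(); ++i) {
        const double saved = params[i];
        params[i] = saved + h;
        const double fPlus = f(params);
        params[i] = saved - h;
        const double fMinus = f(params);
        params[i] = saved; // restore before moving to the next parameter
        grad[i] = (fPlus - fMinus) / (2.0 * h);
    }
    return grad;
}
```

One function evaluation costs a full pass over your position database, and this needs two per parameter, which is why the automatic-differentiation trick below it in the thread is attractive when you have many parameters.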
Here's another idea I've been working with: make your evaluation function a template so it can handle different types to represent scores. Now create a type `Jet` that has both a value and a gradient. C++ allows you to overload operators to make this work. So you define the sum of two Jets as the Jet whose value is the sum of the two values and whose derivatives are the sums of the derivatives. You define the product of two Jets by multiplying the values and using the rule for differentiating products to compute the derivatives.
When treated as a Jet, a constant just has its value, and its derivatives are all 0. For the parameters we want to tune, we give each one its current value and set its derivatives to 0 except for the one corresponding to that parameter, which is 1.
When you run your evaluation function with the type Jet, you'll automatically get the gradient without any additional programming! You can then very easily compute your error function using Jets, get its gradient as well, and supply that to BFGS or L-BFGS for optimization.
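Here's a minimal sketch of what I mean (the names and the toy `eval` function are just for illustration; a real version would also overload `-`, `/`, comparisons, and so on):

```cpp
#include <array>
#include <cstddef>

// Forward-mode "Jet" over N tunable parameters: carries a value plus the
// vector of partial derivatives of that value with respect to each parameter.
template <std::size_t N>
struct Jet {
    double v;                   // the score itself
    std::array<double, N> d{};  // d[i] = partial derivative w.r.t. parameter i

    // A constant: value only, all derivatives are 0.
    Jet(double value = 0.0) : v(value) {}

    // A tunable parameter: derivative 1 in its own slot, 0 everywhere else.
    static Jet variable(double value, std::size_t index) {
        Jet j(value);
        j.d[index] = 1.0;
        return j;
    }
};

// Sum: add the values, add the derivatives.
template <std::size_t N>
Jet<N> operator+(const Jet<N>& a, const Jet<N>& b) {
    Jet<N> r(a.v + b.v);
    for (std::size_t i = 0; i < N; ++i) r.d[i] = a.d[i] + b.d[i];
    return r;
}

// Product: multiply the values, use the product rule for the derivatives.
template <std::size_t N>
Jet<N> operator*(const Jet<N>& a, const Jet<N>& b) {
    Jet<N> r(a.v * b.v);
    for (std::size_t i = 0; i < N; ++i)
        r.d[i] = a.d[i] * b.v + a.v * b.d[i];
    return r;
}

// A toy templated "evaluation": f = w0*x + w1*x*x.
// Instantiate with double for a plain score, with Jet for score + gradient.
template <typename T>
T eval(const T& w0, const T& w1, const T& x) {
    return w0 * x + w1 * x * x;
}
```

To use it, build `w0` and `w1` with `Jet<2>::variable` (so each gets its own derivative slot), pass positions in as constants, and read the gradient out of the result's `d` array.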
If this is not very clear, I can post some sample code.