Experiments with eval tuning

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Experiments with eval tuning

Post by jdart »

I agree with your general point, although there is no reason why search parameters cannot be included in the tuning set, too.

--Jon
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Experiments with eval tuning

Post by bob »

jdart wrote:I agree with your general point, although there is no reason why search parameters cannot be included in the tuning set, too.

--Jon
There is one reason. Now your searches can't be one or two plies. You have to go deep enough to exercise what is being tuned. LMR would be an ugly case, since many use a dynamic R value that varies based on depth.

It would turn this from a huge computational task to an impossible one.
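For concreteness, a dynamic reduction of the kind described often looks something like the sketch below; the shape (logarithmic in both depth and move number) is common, but the constants here are made up for the example, not taken from any particular engine. Because R grows with depth, a one- or two-ply tuning search would never exercise the larger reductions at all.

Code: Select all

#include <cmath>

// Illustrative depth- and move-count-dependent LMR reduction.
// Constants are invented for the example.
int lmrReduction(int depth, int moveNumber) {
    if (depth < 3 || moveNumber < 4)
        return 0;  // too shallow, or too early in the move list, to reduce
    return static_cast<int>(std::log(depth) * std::log(moveNumber) / 2.0);
}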
Bloodbane
Posts: 154
Joined: Thu Oct 03, 2013 4:17 pm

Re: Experiments with eval tuning

Post by Bloodbane »

Do you remember how you generated neighbouring states for SA? The best thing I've been able to come up with is just randomly changing the values up or down using some normal distribution centered around the current values, and that doesn't seem like a good idea. I haven't tried implementing SA yet (for any domain) so it might be fun to try even if it will most likely fail.
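For what it is worth, that Gaussian-perturbation scheme is a standard neighbour function for SA over integer parameters. A minimal sketch, assuming a non-empty parameter vector; the step size sigma is a made-up tuning knob, and shrinking it as the temperature drops is a common refinement:

Code: Select all

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Neighbour function for simulated annealing: pick one parameter at
// random and perturb it by a normally distributed step centred on its
// current value.
std::vector<int> neighbour(const std::vector<int>& params, double sigma,
                           std::mt19937& rng) {
    std::vector<int> next = params;
    std::uniform_int_distribution<std::size_t> pick(0, params.size() - 1);
    std::normal_distribution<double> step(0.0, sigma);
    next[pick(rng)] += static_cast<int>(std::lround(step(rng)));
    return next;
}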
Functional programming combines the flexibility and power of abstract mathematics with the intuitive clarity of abstract mathematics.
https://github.com/mAarnos
Henk
Posts: 7218
Joined: Mon May 27, 2013 10:31 am

Re: Experiments with eval tuning

Post by Henk »

An advantage of simulated annealing is that it is easy to implement. But you still have the problem of overfitting. So even a global optimum might not give a good evaluation for some types of positions it has never seen.

I doubt it is possible to create a training set that represents every possible type of position well (in chess, of course).
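The usual partial remedy is a held-out validation set: tune on one part of the positions and track the error on the rest, stopping when the held-out error stops improving. A minimal sketch of the split; Position here is a stand-in for whatever record the tuner actually uses:

Code: Select all

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Shuffle the position set and hold out a fraction for validation.
// Parameters accepted during tuning should also reduce the error on
// the held-out part; otherwise the tuner is likely overfitting.
template <typename Position>
void splitPositions(std::vector<Position> all, double holdOut,
                    std::vector<Position>& train,
                    std::vector<Position>& validation) {
    std::shuffle(all.begin(), all.end(), std::mt19937{42});
    std::size_t cut = static_cast<std::size_t>(all.size() * (1.0 - holdOut));
    train.assign(all.begin(), all.begin() + cut);
    validation.assign(all.begin() + cut, all.end());
}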
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Experiments with eval tuning

Post by mvk »

jdart wrote:I agree with your general point, although there is no reason why search parameters cannot be included in the tuning set, too.
For search parameters, the 3M searches will become the bottleneck. So in my case I use a different method for those.
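For context, the objective in this kind of tuning is typically a mean squared error between game results and a sigmoid of a per-position search score, so every evaluation of the error costs one (quiescence) search per position; with ~3M positions, that is exactly the bottleneck. A sketch, assuming the usual Texel-style form; K is an engine-specific scaling constant:

Code: Select all

#include <cmath>
#include <cstddef>
#include <vector>

// Mean squared error between game results (1, 0.5, 0) and the winning
// probability predicted from each position's search score (centipawns).
// Recomputing the scores is the expensive part: one search per position.
double meanSquaredError(const std::vector<double>& results,
                        const std::vector<double>& scores, double K) {
    double e = 0.0;
    for (std::size_t i = 0; i < scores.size(); ++i) {
        double predicted = 1.0 / (1.0 + std::pow(10.0, -K * scores[i] / 400.0));
        double d = results[i] - predicted;
        e += d * d;
    }
    return e / scores.size();
}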
[Account deleted]
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Experiments with eval tuning

Post by jdart »

SA is not a good method, especially for large dimensions (many parameters). See http://coco.gforge.inria.fr/doku.php for some benchmarks and comparisons (for real-valued functions).

CMA-ES, especially with restarts (BIPOP-CMA-ES), is pretty good. NOMAD is not bad for moderate-sized problems (<= 50 vars).
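A full CMA-ES is too much to sketch here, but a minimal relative, the (1+1)-ES with the classic 1/5th-success-rule step-size adaptation, shows the basic shape of the family; everything below is illustrative, and CMA-ES additionally adapts a full covariance matrix rather than a single global sigma:

Code: Select all

#include <cmath>
#include <functional>
#include <random>
#include <vector>

// Toy (1+1) evolution strategy: mutate the current point with Gaussian
// noise, keep the mutant if it is better, and adapt the step size so
// that roughly 1/5 of mutations succeed (widen on success, shrink on
// failure). f is the objective to minimise, e.g. the tuning error.
std::vector<double> onePlusOneES(
        const std::function<double(const std::vector<double>&)>& f,
        std::vector<double> x, double sigma, int iterations) {
    std::mt19937 rng(1);
    std::normal_distribution<double> gauss(0.0, 1.0);
    double fx = f(x);
    for (int t = 0; t < iterations; ++t) {
        std::vector<double> y = x;
        for (double& yi : y)
            yi += sigma * gauss(rng);
        double fy = f(y);
        if (fy < fx) {
            x = y;
            fx = fy;
            sigma *= 1.5;                   // success: widen the step
        } else {
            sigma *= std::pow(1.5, -0.25);  // failure: shrink slowly
        }
    }
    return x;
}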

As far as I can tell, though, the state of the art is expensive commercial systems like OptQuest or OQNLP from TOMLAB. Next best is probably one of these:

1. Surrogate methods using a model such as Radial Basis Functions. See http://dblp.uni-trier.de/pers/hd/r/Regis:Rommel_G=. There are a few implementations, for example http://people.sutd.edu.sg/~nannicini/in ... age=rbfopt.

2. Hybrid methods that combine some kind of global search with a local search for refining estimates. There are quite a few of these, but one is PSwarm (http://www.norg.uminho.pt/aivaz/pswarm/). Another is MA-CMA-Chains by D. Molina (http://sci2s.ugr.es/eamhco/#Software).

--Jon
Henk
Posts: 7218
Joined: Mon May 27, 2013 10:31 am

Re: Experiments with eval tuning

Post by Henk »

But we only need a solution that generalizes well to unseen data. So it might even be that a bad optimization algorithm finds such a solution quickly enough.
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Experiments with eval tuning

Post by jdart »

If you start with a fairly bad configuration it is easy to find ways to improve it.

But you will get to a point where further tuning does not improve things. At that point, all you know is that you are at a local optimum; it may not be the best point you can reach.

The difference between okay and very good optimization methods is that the latter increase the chances of finding a globally best solution; some of them also try to find that solution with a relatively small number of evaluations.

--Jon