How Do You Automatically Tune Your Evaluation Tables?

AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: The texel evaluation function optimization algorithm

Post by AlvaroBegue »

tpetzke wrote:In my tests the derived values are often far away from intuitive values. I have also repeated tuning runs that then derived very different parameter sets with almost equal strength.

So strange-looking values are not so bad; however, they might give you a hint about important and less important (maybe even redundant) parameters.
There are regularization methods to handle this type of situation. For instance, you could replace your error function E with

E' = E + some_constant * L1(parameters)

Here L1(parameters) means the sum of the absolute values of all the parameters in your evaluation function. (For this to make sense, you need to somehow normalize things in advance; so if your parameters are simply coefficients in front of features, you should normalize your features so they have mean 0 and variance 1, or something like that.)

These L1-regularization methods seem to be very much in fashion these days. They have some very neat properties. If you have two features that are nearly identical, so that only the sum of their coefficients is determined, a little bit of L1 regularization will make the coefficients about equal. If you have a feature that is pure noise, it might very well get a coefficient of exactly 0, which is something that wouldn't normally happen with least-squares regression. You might be able to identify parts of your code that don't even need to run!

Has anyone tried anything like that for tuning evaluation functions in chess?
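
For concreteness, here is a minimal Python sketch of what this looks like on top of the Texel error function discussed earlier in the thread. evaluate() is a placeholder for your engine's evaluation, K is the usual Texel scaling constant, and lam plays the role of some_constant above:

Code: Select all

def sigmoid(score_cp, K=1.0):
    # Map a centipawn score to an expected result in [0, 1];
    # K is the scaling constant fitted once before tuning.
    return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

def texel_error(params, positions):
    # positions holds (position, result) pairs, result in {0, 0.5, 1};
    # evaluate() is a placeholder for the engine's static evaluation.
    e = sum((result - sigmoid(evaluate(pos, params))) ** 2
            for pos, result in positions)
    return e / len(positions)

def regularized_error(params, positions, lam=0.01):
    # E' = E + some_constant * L1(parameters)
    return texel_error(params, positions) + lam * sum(abs(p) for p in params)

Again, this only behaves sensibly if the features are normalized first, so that the same lam means the same thing for every parameter.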
elcabesa
Posts: 855
Joined: Sun May 23, 2010 1:32 pm

Re: The texel evaluation function optimization algorithm

Post by elcabesa »

I'm trying it with Vajolet. Fixing the endgame pawn value to 100cp, I get more or less 40cp for the pawn in the middlegame :)
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: The texel evaluation function optimization algorithm

Post by AlvaroBegue »

Is it possible that your pawns get many other bonuses in the middlegame (pawn structure, king protection, creating outposts for pieces...)?
jdart
Posts: 4367
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: The texel evaluation function optimization algorithm

Post by jdart »

Note there are some other machine learning algorithms that might give better results:

1. Neural networks, especially GMDH and related ones that can handle non-linear relationships.

2. SVM (good learning algorithm but has some scalability issues with very large data sets).

3. AdaBoost (http://en.wikipedia.org/wiki/AdaBoost), iterative classification algorithm.

--Jon
User avatar
Steve Maughan
Posts: 1221
Joined: Wed Mar 08, 2006 8:28 pm
Location: Florida, USA

Re: The texel evaluation function optimization algorithm

Post by Steve Maughan »

petero2 wrote:(...)This typically gives about 8.8 million positions which are stored as FEN strings in a text file.(...)
Is there an off-the-shelf solution which can take a PGN file, turn the non-book moves into FENs, and add the result of the game at the end of each FEN?

Or do I need to roll my own parsing routine?

Thanks,

Steve
http://www.chessprogramming.net - Maverick Chess Engine
User avatar
Bloodbane
Posts: 154
Joined: Thu Oct 03, 2013 4:17 pm

Re: The texel evaluation function optimization algorithm

Post by Bloodbane »

There is a program called pgn2fen which does what its name implies. Unfortunately it turns every position in the PGNs into FENs, including the starting position, so getting rid of the opening positions is a must. I wrote a program which scans each FEN, checks the move number, and deletes the position if it is under 12; this is a pretty trivial task. Also, I created three files (white wins, draws, and black wins), so you don't have to write the result anywhere: the file itself stores the result.
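
The filtering really is only a few lines. A rough Python sketch, assuming one FEN per line (the file names here are just illustrative):

Code: Select all

def fullmove_number(fen):
    # The fullmove counter is the sixth field of a FEN string.
    return int(fen.split()[5])

def filter_fens(in_path, out_path, min_move=12):
    # Keep only positions from move 12 onwards, as described above.
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            fen = line.strip()
            if fen and fullmove_number(fen) >= min_move:
                dst.write(fen + "\n")

# One file per result, so the file name itself stores the result.
for result in ("white_wins", "draws", "black_wins"):
    filter_fens(result + ".fen", result + "_filtered.fen")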
Functional programming combines the flexibility and power of abstract mathematics with the intuitive clarity of abstract mathematics.
https://github.com/mAarnos
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: The texel evaluation function optimization algorithm

Post by AlvaroBegue »

There is also a tool called pgn-extract (http://www.cs.kent.ac.uk/people/staff/djb/pgn-extract/) which can convert the moves in a PGN file to long algebraic notation, after which it is fairly trivial to parse yourself. That way you have more control of which positions to exclude.
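
Once the moves are in long algebraic notation, the parsing is indeed trivial. A sketch, assuming moves of the form e2-e4 / Ng1-f3 / e4xd5 (check pgn-extract's actual output before trusting the regex):

Code: Select all

import re

# From- and to-square are explicit in long algebraic notation, so no
# disambiguation logic is needed. Castling ("O-O"), move numbers and
# results don't match the pattern and simply return None.
MOVE_RE = re.compile(r"([KQRBN]?)([a-h][1-8])[-x]([a-h][1-8])(?:=([QRBN]))?")

def parse_move(token):
    m = MOVE_RE.fullmatch(token)
    if m is None:
        return None
    piece, frm, to, promo = m.groups()
    return (piece or "P", frm, to, promo)

print(parse_move("Ng1-f3"))   # ('N', 'g1', 'f3', None)
print(parse_move("e7-e8=Q"))  # ('P', 'e7', 'e8', 'Q')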
brianr
Posts: 536
Joined: Thu Mar 09, 2006 3:01 pm

Re: The texel evaluation function optimization algorithm

Post by brianr »

Most of it, but not all, can be done with a combination of existing tools like pgn-extract and epdFin (from Norm Pollock's 40H-EPD tools collection).

However, for Texel-type learning the end result of the game is also needed, so I just cobbled together my own routine. It is only run when building a new EPD test set anyway.

Message me if you would like the rather raw code.
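
The gist, though, is just attaching the result to every position. A rough sketch of that idea in Python; the c9 opcode is only one convention some tuning sets use, not necessarily what my code does:

Code: Select all

def tag_with_result(fen, result):
    # result is the PGN result string: "1-0", "1/2-1/2" or "0-1".
    return '%s c9 "%s";' % (fen, result)

print(tag_with_result(
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
    "1-0"))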
tpetzke
Posts: 686
Joined: Thu Mar 03, 2011 4:57 pm
Location: Germany

Re: The texel evaluation function optimization algorithm

Post by tpetzke »

Hi Steve,

As I'm interested only in quiet positions, I have to feed all positions to my engine anyway to check whether they are quiet. In this process I also skip the early positions and positions with a score > 600cp.

From the remaining positions I pick a random one from the game and write it along with the result to an output file.

So I use

1. pgn-extract to classify the games into won, drawn, or lost for White
2. pgn2fen filename -w to convert the games from the above files into a sequence of FENs with White to move
3. feed the FENs to my engine to filter them

The result is then my input for the tuning.
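
Step 3 plus the random pick looks roughly like this; engine.qsearch and engine.evaluate stand in for whatever interface your engine exposes, and the quietness test shown is one common choice:

Code: Select all

import random

SCORE_LIMIT = 600  # centipawns, as above

def is_quiet(fen):
    # One common definition of quiet: the quiescence search score
    # equals the static evaluation, i.e. nothing tactical is pending.
    return engine.qsearch(fen) == engine.evaluate(fen)

def pick_training_position(game_fens, result):
    # Skip early positions, drop loud or already-decided ones, then
    # keep one random survivor per game together with the game result.
    candidates = [fen for fen in game_fens[12:]
                  if is_quiet(fen) and abs(engine.evaluate(fen)) <= SCORE_LIMIT]
    return (random.choice(candidates), result) if candidates else None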

Thomas...

=======
http://macechess.blogspot.com - iCE Chess Engine
Jan Brouwer
Posts: 201
Joined: Thu Mar 22, 2007 7:12 pm
Location: Netherlands

Re: The texel evaluation function optimization algorithm

Post by Jan Brouwer »

AlvaroBegue wrote:(...)These L1-regularization methods seem to be very much in fashion these days. They have some very neat properties.(...) Has anyone tried anything like that for tuning evaluation functions in chess?
Thanks for mentioning this.
I was unaware of regularization, and it does indeed seem appropriate for the tuning of evaluation parameters, or even for the selection of evaluation features.