Parameter tuning

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Onno Garms
Posts: 224
Joined: Mon Mar 12, 2007 6:31 pm
Location: Bonn, Germany

Parameter tuning

Post by Onno Garms » Sun Mar 13, 2011 7:43 pm

I believe that one of the core ingredients of Rybka is some automated
parameter tuning. See also
http://talkchess.com/forum/viewtopic.ph ... 51&t=18864
While much of Rybka has been taken apart now, parameter tuning
remains one of Vas's secrets.

I tend to believe that his way of parameter tuning is superior to
both of the approaches below. I also assume that it is not his skills as
an IM that allow him to do this parameter tuning, but some automated
process that is unknown to the community.

There have been approaches to automatic parameter tuning before. One
is the deep blue way:
http://chessprogramming.wikispaces.com/Andreas+Nowatzyk

Another suggestion was made by Lars Bremer:
http://www.computerschach.de/index.php? ... &Itemid=41
See "Genetische Algorithmen in Denkspielen" ("Genetic Algorithms in
Mind Games"); the link requires paid registration and the article is in German.

Lars essentially suggests building a population in which the values of
various parameters are the genes. Copies of one chess engine that differ
only in those parameter values decode the genes into engines. Fitness is
playing strength.
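A minimal sketch of that encoding (all names are mine and hypothetical; in a real run the fitness would come from tournament games, not be computed here):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// An individual is just a vector of evaluation weights (the "genes");
// fitness would be filled in from tournament results.
struct Individual {
    std::vector<double> genes;   // e.g. mobility, pawn structure, king safety
    double fitness = 0.0;        // points scored in the round-robin
};

// Uniform crossover: each gene of the child comes from one parent at random.
Individual crossover(const Individual& a, const Individual& b, std::mt19937& rng)
{
    std::bernoulli_distribution coin(0.5);
    Individual child;
    child.genes.resize(a.genes.size());
    for (std::size_t i = 0; i < a.genes.size(); ++i)
        child.genes[i] = coin(rng) ? a.genes[i] : b.genes[i];
    return child;
}

// Mutation: add small Gaussian noise to one randomly chosen gene.
void mutate(Individual& ind, std::mt19937& rng, double sigma = 0.05)
{
    std::uniform_int_distribution<std::size_t> pick(0, ind.genes.size() - 1);
    std::normal_distribution<double> noise(0.0, sigma);
    ind.genes[pick(rng)] += noise(rng);
}
```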


I tried to put this approach into practice, with some success but no
breakthrough. I found weights for the basic evaluation features
(such as material - normalized to a weight factor of 1.0 - mobility, pawn
structure, etc.). These weights were compiled into Onno and I kept them
in the released versions.

With the new values, Onno did better in my tests than before,
but I could not prove statistical significance. However, the new
weights were fairly different from the original weights, so this
procedure definitely did not "work" merely by changing little.
Random modification of the weights by the same amount is
harmful. But I might just have been lucky with my GA results. Further
investigation by other people should answer that.

More computational power might help to improve the results. For the
implementation of the genetic algorithm I followed the "Handbook of
Metaheuristics" by Glover and Kochenberger.

There is much room for improvement in the details below. I just post
how I did it:

Tournament style
================
Lars suggests playing Swiss tournaments. I opted for a smaller
population and played a round-robin. In later rounds, games between
individuals at the bottom of the table that cannot influence which
individuals are selected should be pruned (not implemented). I played
100,000 nodes per move.

Selection
=========
Selecting openings at random and not playing too many games introduces
enough randomness into the results, so I did the selection
deterministically. I select the first selected_size individuals. The
tournament winner gets the most "votes"; runners-up get fewer.
Overall like this:

Code: Select all

double rank_to_votes (int p_rank)
{
  int boni = OperationsRc::selected_size - 1 - p_rank;

  if (boni < 0)
  {
    return 0.0;
  }
  else
  {
    double k = OperationsRc::selected_size;
    double n = OperationsRc::population_size;
    double a = 2 * (n-k) / (n*k*(k-1));
    return 1/n + a*boni;
  }
}
The "votes" are then converted into multiplicities that determine how
often each individual is selected into the next population. This is
done with the d'Hondt method:
http://en.wikipedia.org/wiki/D%27Hondt_method
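In code, the d'Hondt step might look roughly like this (a sketch; `seats` would be the size of the next population, and the function name is mine). A seat is repeatedly awarded to whichever individual currently has the highest quotient votes/(seats won + 1):

```cpp
#include <cstddef>
#include <vector>

// Distribute `seats` slots in the next population among individuals
// in proportion to their votes, using the d'Hondt highest-quotient rule.
std::vector<int> dhondt(const std::vector<double>& votes, int seats)
{
    std::vector<int> won(votes.size(), 0);
    for (int s = 0; s < seats; ++s) {
        std::size_t best = 0;
        double best_q = -1.0;
        for (std::size_t i = 0; i < votes.size(); ++i) {
            double q = votes[i] / (won[i] + 1);  // next quotient for i
            if (q > best_q) { best_q = q; best = i; }
        }
        ++won[best];  // award this seat to the highest quotient
    }
    return won;
}
```

With the example from the Wikipedia page (votes 100000, 80000, 30000, 20000 and 8 seats), this yields the apportionment 4, 3, 1, 0.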

Operations
==========
There are crossover and mutation. The details are not worth mentioning.

Selection of final individual
=============================
To obtain the individual that undergoes the normal gameplay tests,
averaging the values over the population seems to work better than
taking the best individual. Note that these averages differ
considerably from the averages in the first population.
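The averaging step is just a componentwise mean over the final population's parameter vectors, as a sketch:

```cpp
#include <cstddef>
#include <vector>

// Componentwise average of all gene vectors in the final population.
// Assumes a non-empty population of equal-length vectors.
std::vector<double> average_genes(const std::vector<std::vector<double>>& pop)
{
    std::vector<double> avg(pop.front().size(), 0.0);
    for (const auto& genes : pop)
        for (std::size_t i = 0; i < genes.size(); ++i)
            avg[i] += genes[i];
    for (double& v : avg)
        v /= pop.size();
    return avg;
}
```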

Onno Garms
Posts: 224
Joined: Mon Mar 12, 2007 6:31 pm
Location: Bonn, Germany

Re: Parameter tuning

Post by Onno Garms » Sun Mar 20, 2011 12:38 pm

Onno Garms wrote:I believe that one of the core ingredients of Rybka is some automated
parameter tuning. See also
http://talkchess.com/forum/viewtopic.ph ... 51&t=18864
While much of Rybka has been taken apart now, parameter tuning
remains one of Vas's secrets.

I tend to believe that his way of parameter tuning is superior to
both of the approaches below. I also assume that it is not his skills as
an IM that allow him to do this parameter tuning, but some automated
process that is unknown to the community.
At least not completely unknown. See Anthony's letter, which also emphasizes the importance of parameter tuning.
http://www.acoz.net/zappa/rybka-cloning/

Jan Brouwer
Posts: 201
Joined: Thu Mar 22, 2007 6:12 pm
Location: Netherlands

Re: Parameter tuning

Post by Jan Brouwer » Sun Mar 20, 2011 1:34 pm

Onno Garms wrote:
Onno Garms wrote:I believe that one of the core ingredients of Rybka is some automated
parameter tuning. See also
http://talkchess.com/forum/viewtopic.ph ... 51&t=18864
While much of Rybka has been taken apart now, parameter tuning
remains one of Vas's secrets.

I tend to believe that his way of parameter tuning is superior to
both of the approaches below. I also assume that it is not his skills as
an IM that allow him to do this parameter tuning, but some automated
process that is unknown to the community.
At least not completely unknown. See Anthony's letter, which also emphasizes the importance of parameter tuning.
http://www.acoz.net/zappa/rybka-cloning/
Also, the Stockfish team uses automated tuning.
Joona has hinted in the past that they can tune a significant number of parameters in one day.

FlavusSnow
Posts: 89
Joined: Thu Apr 01, 2010 3:28 am
Location: Omaha, NE

Re: Parameter tuning

Post by FlavusSnow » Sun Mar 20, 2011 3:25 pm

I find it somewhat ironic that most engine developers are quick to scorn tuning engine parameters to one position or based on one game (because you may fix one position and break 10 positions), yet they will do 100,000 games tuning their engine solely against itself.

I'll admit that x' vs. x matches are more scientific than tuning to single positions or single games, but when you tune an engine solely against itself you're making several assumptions that can't truly be substantiated. A couple that come to mind:

1. if engine x' performs better against engine x, then it will perform better against a pool of different engines a through z.

2. there is one 'best' set of values for engine x parameters, and that this set of values is able to be found by fine tuning a single or limited number of initial sets of starting parameters.

3. search parameters are independent of evaluation parameters. e.g. you're holding the criteria of the search constant and varying the criteria of the evaluation. I realize this is not always the case, because some or many of the prune/extension triggers are somehow based on the evaluation function. But a good example would be if the same eval were used for null move pruning as was used for the regular search; this assumes that the engine is concerned with the same criteria in a null move search as it is with a regular search. I don't know how we can conclude that without tuning each independently.

bob
Posts: 20563
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Parameter tuning

Post by bob » Sun Mar 20, 2011 6:34 pm

FlavusSnow wrote:I find it somewhat ironic that most engine developers are quick to scorn tuning engine parameters to one position or based on one game (because you may fix one position and break 10 positions), yet they will do 100,000 games tuning their engine solely against itself.

I'll admit that x' vs. x matches are more scientific than tuning to single positions or single games, but when you tune an engine solely against itself you're making several assumptions that can't truly be substantiated. A couple that come to mind:

1. if engine x' performs better against engine x, then it will perform better against a pool of different engines a through z.

2. there is one 'best' set of values for engine x parameters, and that this set of values is able to be found by fine tuning a single or limited number of initial sets of starting parameters.

3. search parameters are independent of evaluation parameters. e.g. you're holding the criteria of the search constant and varying the criteria of the evaluation. I realize this is not always the case, because some or many of the prune/extension triggers are somehow based on the evaluation function. But a good example would be if the same eval were used for null move pruning as was used for the regular search; this assumes that the engine is concerned with the same criteria in a null move search as it is with a regular search. I don't know how we can conclude that without tuning each independently.
I happen to agree. If your engine doesn't understand something, say king safety, then it will be unable to defend very well, but it also won't attack very well, so you can get the scores out of whack without knowing. I _never_ test crafty vs crafty except to make sure that the program actually functions after I change something significant. All of my testing is against other opponents to avoid that type of "incestuous" mistake.

Onno Garms
Posts: 224
Joined: Mon Mar 12, 2007 6:31 pm
Location: Bonn, Germany

Re: Parameter tuning

Post by Onno Garms » Sun Mar 20, 2011 7:16 pm

I agree too. Most likely the results would have been better if I had integrated other engines into the genetic algorithm. In other tests (I used games at a fixed node count and 1'+1" blitz games) I only played against other engines.

To play at 100,000 nodes/move against other engines I still used the UCI or Winboard protocol. If one wanted much faster games, one would need to modify the other engines' sources and link them against the tournament framework.

muxecoid
Posts: 150
Joined: Sat Jan 30, 2010 9:54 am
Location: Israel

Re: Parameter tuning

Post by muxecoid » Tue Mar 22, 2011 8:46 pm

The textbook solutions to optimization problems are genetic algorithms and simulated annealing. Has anyone tried simulated annealing as a way to tune chess engine parameters?
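For concreteness, a minimal simulated-annealing loop over a parameter vector might look like this. All names are hypothetical, and the quadratic cost function is only a stand-in; for engine tuning it would be replaced by a (noisy, expensive) measurement of playing strength from match results:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Toy objective standing in for "Elo loss"; minimum at (3, -1).
double cost(const std::vector<double>& p)
{
    return (p[0] - 3.0) * (p[0] - 3.0) + (p[1] + 1.0) * (p[1] + 1.0);
}

// Simulated annealing: perturb one parameter at a time, always accept
// improvements, accept worse moves with probability exp(-delta/T),
// and cool the temperature geometrically. Returns the best vector seen.
std::vector<double> anneal(std::vector<double> p, int steps, unsigned seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<double> perturb(0.0, 0.5);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::uniform_int_distribution<std::size_t> pick(0, p.size() - 1);

    std::vector<double> best = p;
    double c = cost(p), best_c = c, T = 1.0;
    for (int i = 0; i < steps; ++i, T *= 0.99) {
        std::vector<double> q = p;
        q[pick(rng)] += perturb(rng);            // candidate move
        double cq = cost(q);
        if (cq < c || u(rng) < std::exp((c - cq) / T)) { p = q; c = cq; }
        if (c < best_c) { best = p; best_c = c; } // track best-so-far
    }
    return best;
}
```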

marcelk
Posts: 348
Joined: Fri Feb 26, 2010 11:21 pm

Re: Parameter tuning

Post by marcelk » Wed Mar 23, 2011 9:48 am

muxecoid wrote:The textbook solutions to optimization problem are genetic algorithms and simulated annealing. Did anyone try simulated annealing as a way to tune chess engine parameters?
I use a least-squares (LSQ) fit method instead. Much simpler.

wgarvin
Posts: 838
Joined: Thu Jul 05, 2007 3:03 pm
Location: British Columbia, Canada

Re: Parameter tuning

Post by wgarvin » Wed Mar 23, 2011 12:07 pm

Back in the late '90s, there were some experiments with KnightCap on automated parameter tuning using a TD(lambda) method: http://citeseerx.ist.psu.edu/viewdoc/su ... .1.54.8263

FlavusSnow
Posts: 89
Joined: Thu Apr 01, 2010 3:28 am
Location: Omaha, NE

Re: Parameter tuning

Post by FlavusSnow » Wed Mar 23, 2011 1:38 pm

Does anyone know of a competitive engine whose parameters are loaded from an ini file? I'm a real fan of these types of tuning algorithms and I'd like to be able to just adjust an ini file instead of recompiling a program in order to tune it.
