Tuning search parameters

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Tuning search parameters

Post by niel5946 »

Hi :)

In the "progress on Loki" thread I created recently, I wrote about my SPSA/Texel tuning framework for the evaluation, which IMO works quite nicely. I think the gain from tuning Loki's evaluation function was around 100-150 Elo, and now I am beginning to think about search tuning.
I know some different gradient descent algorithms, but the problem is that I can't think of a way to compute any sort of error function for the search... How do other people do it? By self-play?

I also know that genetic algorithms can be used, but it seems a little cumbersome to convert every value to a bit string (which also imposes a bound on the size of each parameter), so I suspect there are better ways to do this.

How are search parameters normally tuned?

Thanks in advance :D
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Tuning search parameters

Post by Ferdy »

You can try the following.

Chess Tuning Tools
Lakas
Optuna Game Parameter Tuner

I developed the last 2.

They can optimize search parameters, eval parameters, or a combination of the two. Both Lakas and the Optuna tuner include genetic-algorithm optimizers. Lakas is built on the nevergrad framework, while the Optuna tuner is built on the Optuna framework.

Aside from genetic algorithms, Lakas has an optimizer based on the bayesian-optimization library, while the Optuna tuner also supports scikit-optimize.
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Tuning search parameters

Post by niel5946 »

Ferdy wrote: Sun Apr 18, 2021 4:10 pm You can try the following.

Chess Tuning Tools
Lakas
Optuna Game Parameter Tuner

I developed the last 2.

They can optimize search parameters, eval parameters, or a combination of the two. Both Lakas and the Optuna tuner include genetic-algorithm optimizers. Lakas is built on the nevergrad framework, while the Optuna tuner is built on the Optuna framework.

Aside from genetic algorithms, Lakas has an optimizer based on the bayesian-optimization library, while the Optuna tuner also supports scikit-optimize.
Thank you for the references! :D

I have looked through the code a bit, but can't really work out what is what. I will try to study it a little more...

How are the objective functions measured in the tuners? And how is the fitness function of the genetic algorithms calculated? From self-play tournament results?
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Tuning search parameters

Post by Ferdy »

niel5946 wrote: Mon Apr 19, 2021 10:05 am
Ferdy wrote: Sun Apr 18, 2021 4:10 pm You can try the following.

Chess Tuning Tools
Lakas
Optuna Game Parameter Tuner

I developed the last 2.

They can optimize search parameters, eval parameters, or a combination of the two. Both Lakas and the Optuna tuner include genetic-algorithm optimizers. Lakas is built on the nevergrad framework, while the Optuna tuner is built on the Optuna framework.

Aside from genetic algorithms, Lakas has an optimizer based on the bayesian-optimization library, while the Optuna tuner also supports scikit-optimize.
Thank you for the references! :D

I have looked through the code a bit, but can't really work out what is what. I will try to study it a little more...

How are the objective functions measured in the tuners? And how is the fitness function of the genetic algorithms calculated? From self-play tournament results?
For the Optuna tuner and Lakas, the objective is the result of a base_engine vs. test_engine match. The base_engine takes the default parameter values, while the test_engine takes the parameter values suggested by the optimizer.

There are two modes for using the best parameters found.
1. Play the best parameters found so far against the parameter values suggested by the optimizer. This is the default in the Optuna tuner. You can use the

Code: Select all

--fix-base-param 
flag to enable mode 2.


2. Do not play the best parameters found so far against the optimizer's suggestions. Instead, always use the default parameter values against the parameter values suggested by the optimizer. (This is the default in Lakas.) You can use the

Code: Select all

--use-best-param 
flag to enable mode 1.
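The base-vs-test scheme described above can be sketched as a generic optimization loop. To be clear, this is not code from Lakas or the Optuna tuner: the engine match is replaced by a toy model (a hypothetical futility_margin parameter with an invented sweet spot), standing in for a real cutechess-cli match, and the "optimizer" is plain random search.

```python
import random

# Hypothetical stand-in for a real engine match (normally run via a tool
# such as cutechess-cli): returns the test engine's score rate (0..1)
# against the base engine over a number of games.
def play_match(base_params, test_params, games=100, rng=random.Random(42)):
    # Toy strength model: the closer "futility_margin" is to 150, the
    # stronger the engine. Purely invented for illustration.
    def strength(p):
        return -abs(p["futility_margin"] - 150)
    edge = (strength(test_params) - strength(base_params)) / 400.0
    wins = sum(rng.random() < 0.5 + edge for _ in range(games))
    return wins / games

base = {"futility_margin": 100}   # default engine values (mode 2 keeps these fixed)
best, best_score = dict(base), 0.5

# Simple random search standing in for the optimizer's "suggest" step.
rng = random.Random(0)
for trial in range(30):
    candidate = {"futility_margin": rng.randint(50, 250)}
    score = play_match(base, candidate)   # objective: match result vs. base
    if score > best_score:
        best, best_score = candidate, score

print(best, round(best_score, 2))
```

In "mode 2" terms, the base parameters stay fixed at their defaults; in "mode 1" one would instead replace `base` with `best` after each improvement.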

The Optuna tuner also supports CMA-ES. For the code, you can visit the Optuna site, as it is open source.
Some references:
https://en.wikipedia.org/wiki/CMA-ES
https://arxiv.org/abs/1604.00772
https://optuna.readthedocs.io/en/stable ... aEsSampler
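To give a feel for the evolution-strategy family that CMA-ES belongs to, here is a deliberately simplified (1+1)-ES with 1/5th-success-rule step-size adaptation. Real CMA-ES additionally adapts a full covariance matrix over a population of samples, and the objective below is a toy stand-in, not an engine match.

```python
import random

rng = random.Random(1)

# Toy objective: higher is better, with an invented peak at x = 150
# (think of x as some search margin being tuned).
def objective(x):
    return -(x - 150.0) ** 2

# (1+1) evolution strategy: keep one parent, mutate it with Gaussian
# noise, and adapt the step size sigma based on success/failure.
x, sigma = 100.0, 30.0
fx = objective(x)
for _ in range(200):
    y = x + sigma * rng.gauss(0, 1)   # mutate the current point
    fy = objective(y)
    if fy > fx:                       # accept only improvements
        x, fx = y, fy
        sigma *= 1.5                  # success: widen the search
    else:
        sigma *= 0.9                  # failure: narrow the search

print(round(x, 1))
```

The step-size adaptation is the key idea shared with CMA-ES: the mutation distribution shrinks as the optimum is approached, so late iterations refine rather than wander.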

Nevergrad, used by Lakas, is also open source, so you can check the details of each optimizer there.
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Tuning search parameters

Post by niel5946 »

Alright. Thank you very much for your help!
Search tuning has been on my mind for quite some time, so I am happy to have a place to start :D
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
Kiudee
Posts: 29
Joined: Tue Feb 02, 2010 10:12 pm
Location: Germany
Full name: Karlson Pfannschmidt

Re: Tuning search parameters

Post by Kiudee »

I am the developer of the library chess-tuning-tools.
Since you asked how the objective function is evaluated: chess-tuning-tools utilizes cutechess-cli to run matches with paired openings against a reference engine. The match result is then used in a Bayesian pentanomial model to extract an Elo score as well as an estimate of the uncertainty.
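As a rough, non-Bayesian sketch of the pentanomial idea (chess-tuning-tools fits a proper Bayesian model, which this does not reproduce): treat each pair of games on the same opening as the independent unit, compute the mean score and its standard error from the five possible per-pair outcomes, and map the score to Elo with the usual logistic formula. The counts below are invented for illustration.

```python
import math

# Pentanomial outcomes: each paired-opening game pair gives the test
# engine a combined score of 0, 0.5, 1, 1.5 or 2 points.
pair_scores = [0.0, 0.5, 1.0, 1.5, 2.0]
counts      = [10,  40,  120, 50,  12]   # hypothetical: 232 game pairs

n = sum(counts)
# Mean per-game score (divide the per-pair score by 2).
mean = sum(s * c for s, c in zip(pair_scores, counts)) / (2 * n)
# Variance of the per-pair average score. Using pairs as the unit is the
# point of the pentanomial model: the two games of a pair are correlated,
# so assuming independent games would understate the uncertainty.
var_pair = sum(((s / 2 - mean) ** 2) * c for s, c in zip(pair_scores, counts)) / n
se = math.sqrt(var_pair / n)   # standard error of the mean score

def score_to_elo(p):
    # Standard logistic score-to-Elo conversion, valid for 0 < p < 1.
    return -400.0 * math.log10(1.0 / p - 1.0)

elo = score_to_elo(mean)
elo_lo = score_to_elo(mean - 1.96 * se)   # rough 95% interval endpoints
elo_hi = score_to_elo(mean + 1.96 * se)
print(f"score={mean:.3f}  elo={elo:+.1f}  95% CI [{elo_lo:+.1f}, {elo_hi:+.1f}]")
```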

Feel free to drop me a message if anything is unclear or if you find bugs (of course, using GitHub issues for that is even better).