Yet another parameter tuner using optuna framework


chrisw
Posts: 4313
Joined: Tue Apr 03, 2012 4:28 pm

Re: Yet another parameter tuner using optuna framework

Post by chrisw »

Joerg Oster wrote: Wed Sep 16, 2020 8:33 pm
chrisw wrote: Wed Sep 16, 2020 5:30 pm
Joerg Oster wrote: Wed Sep 16, 2020 4:30 pm
Ferdy wrote: Wed Sep 16, 2020 3:17 pm
3. I'm not sure if parameter changes to a 'quick match' change all five parameters at a time, or just one?
Sorry I don't understand the question.
I guess he wants to know if the tuner changes all parameters at once or one by one for a new trial.
In the document they refer to this as relational sampling and independent sampling.
Yup, pretty much what I meant
If I understand it correctly, Optuna will eventually do both and also a mixture of both, to find out about the correlation of the parameters.
The graphs have labels “importance of parameter”, so I intuited that to mean the process was zeroing in on where to focus, so to speak. I’m floundering a bit for words here because it’s not clear how, or whether, it actually does that.
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Yet another parameter tuner using optuna framework

Post by Ferdy »

Joerg Oster wrote: Wed Sep 16, 2020 4:30 pm
Ferdy wrote: Wed Sep 16, 2020 3:17 pm
3. I'm not sure if parameter changes to a 'quick match' change all five parameters at a time, or just one?
Sorry I don't understand the question.
I guess he wants to know if the tuner changes all parameters at once or one by one for a new trial.
In the document they refer to this as relational sampling and independent sampling.
We can ask the optimizer for new parameter values either one by one or all at once. I did all at once. https://github.com/fsmosca/Optuna-Game- ... ner.py#L65

Example.

Code: Select all

pawn_value = trial.suggest_int('pawn_value', 50, 150, 2)
The arguments are param_name='pawn_value', minimum=50, maximum=150 and step=2. The step of 2 controls the size of the increments applied to the suggested parameter values.

If I want more before making a trial run, I can ask for rook value for example.

Code: Select all

rook_value = trial.suggest_int('rook_value', 400, 600, 4)
Then in the next trial (game match) the test engine will take the pawn_value and rook_value above, while the base engine takes the current best parameter values; in trial 0 the base engine takes the default or initial parameter values.
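For illustration, here is a minimal sketch of how such suggestions could sit inside an Optuna objective function. The engine_match() helper is a hypothetical stand-in for the quick cutechess match the tuner runs; it is assumed to return the test engine's score (0.0 to 1.0) against the base engine.

Code: Select all

import optuna

def objective(trial):
    # Ask Optuna for all parameters at once for this trial.
    pawn_value = trial.suggest_int('pawn_value', 50, 150, 2)
    rook_value = trial.suggest_int('rook_value', 400, 600, 4)

    # Hypothetical helper: play a quick match and return the
    # test engine's score against the base engine.
    score = engine_match({'pawn_value': pawn_value, 'rook_value': rook_value})
    return score

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=100)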
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Yet another parameter tuner using optuna framework

Post by Ferdy »

Joerg Oster wrote: Wed Sep 16, 2020 4:53 pm
Ferdy wrote: Wed Sep 16, 2020 3:17 pm
2. Uses cutechess tournament mode, with output the result of a quick match (25 rounds?)
Yes, the number of games is settable; the more games the better.
The number of games per trial is probably dependent on the sensitivity of the parameters, no?
24 games seems very small in any case.
Whether a parameter needs many games per trial varies, so 24 can be high or low. Other conditions matter too, such as the time control used, the number of parameters being tuned, and the threshold for accepting a new best value. To save optimization time one can start with a lower number of games. What matters is that, after the trials, the parameter values suggested by the optimizer improve over the default values.
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Yet another parameter tuner using optuna framework

Post by Ferdy »

chrisw wrote: Wed Sep 16, 2020 5:41 pm Looks very good. I still need to wrap my head around the graphs, will try later.

One thing, from your Github (and I don't want to flood you with suggested mods, I know how irritating that can be):
Second, in order for the parameter values to be considered the best and replace the old best, they have to defeat the old best by more than 0.55, i.e. a 55% score. Normally this would be only 0.5 or 50%.
I tried something like this in the past (except with random kicks to the parameters, not smart ones as you are doing with Optuna): when a parameter set P1 gives a better result than the current best P0, move the accepted best only fractionally towards P1, rather than all the way.
This could work in the case of the close 50-55% range. Just an idea, but I expect you are full of ideas already!
That can be tried for sure. I have not yet looked at the code inside the optimizer; it may already compute a gradient and adjust a learning rate automatically depending on the match result we send to it after every trial.
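As a rough sketch of the fractional-move idea above (the alpha factor and the example values are made up for illustration):

Code: Select all

# Move the accepted best only a fraction of the way from the old
# best P0 towards the winning trial parameters P1.
def fractional_update(p0, p1, alpha=0.3):
    return {name: round(p0[name] + alpha * (p1[name] - p0[name]))
            for name in p0}

best = {'pawn_value': 100, 'rook_value': 500}
trial_params = {'pawn_value': 120, 'rook_value': 540}
best = fractional_update(best, trial_params)
# best is now {'pawn_value': 106, 'rook_value': 512}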
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Yet another parameter tuner using optuna framework

Post by Ferdy »

No4b wrote: Wed Sep 16, 2020 8:58 pm Very interesting tool!
I will definitely try to work with it.

Am I understanding correctly that the engine itself obtains its parameters via the command line, e.g. as "QueenValueOp=975"?
Not at the moment (I will add it later); you need to modify the code around here https://github.com/fsmosca/Optuna-Game- ... er.py#L197
And I need only one copy of the engine in the folder; the tuner will just execute two copies of it with different parameter sets?
Correct.

The engine can be anywhere; you can also specify an absolute path.

Code: Select all

python tuner.py --engine c:/chess/engines/enginefoldername/engineexefilename ...
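Regarding the two engine copies: below is a hypothetical sketch of how the same binary can be launched twice with different parameter sets through cutechess-cli's option.Name=value engine syntax, driven from Python. The option names, values, time control and game counts are made up for illustration; the actual tuner may build the command differently.

Code: Select all

import subprocess

# Same binary started twice: once with the trial parameters (test)
# and once with the current best parameters (base).
cmd = [
    'cutechess-cli',
    '-engine', 'cmd=engineexefilename', 'name=test',
    'option.pawn_value=104', 'option.rook_value=512',
    '-engine', 'cmd=engineexefilename', 'name=base',
    'option.pawn_value=100', 'option.rook_value=500',
    '-each', 'proto=uci', 'tc=0/10+0.1',
    '-rounds', '12', '-games', '2',
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)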
Kiudee
Posts: 29
Joined: Tue Feb 02, 2010 10:12 pm
Location: Germany
Full name: Karlson Pfannschmidt

Re: Yet another parameter tuner using optuna framework

Post by Kiudee »

Thanks Ferdy for making your tool available! I like some of the plots you output for the results.
I wanted to point out that Optuna uses tree-structured Parzen estimators as its model, which do not model interactions between parameters. Tools based on Gaussian processes (like the chess-tuning-tools or the tool released by thomasahle) take all interactions into account and are thus able to interpolate/extrapolate much more accurately.
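For what it is worth, here is a minimal sketch of switching a study to a sampler that does model parameter correlations, assuming an Optuna release that ships CmaEsSampler (the default is the independent TPE sampler):

Code: Select all

import optuna

# CMA-ES adapts a covariance matrix over the parameters, so it captures
# correlations that the independent TPE sampler ignores.
sampler = optuna.samplers.CmaEsSampler()
study = optuna.create_study(direction='maximize', sampler=sampler)
# study.optimize(objective, n_trials=100)  # same objective as before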
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Yet another parameter tuner using optuna framework

Post by Ferdy »

Kiudee wrote: Thu Sep 17, 2020 1:01 am Thanks Ferdy for making your tool available! I like some of the plots you output for the results.
I wanted to point out that Optuna uses tree-structured Parzen estimators as its model, which do not model interactions between parameters. Tools based on Gaussian processes (like the chess-tuning-tools or the tool released by thomasahle) take all interactions into account and are thus able to interpolate/extrapolate much more accurately.
Thanks for the info. The one from thomasahle takes a lot of memory; I tried it before. I have not yet tried your chess-tuning-tools. I will try it someday and compare it with Optuna.
User avatar
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Yet another parameter tuner using optuna framework

Post by mvanthoor »

At some point in the future, I'll have to look into this, or other similar tools. Thanks :)
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
Joerg Oster
Posts: 937
Joined: Fri Mar 10, 2006 4:29 pm
Location: Germany

Re: Yet another parameter tuner using optuna framework

Post by Joerg Oster »

Ferdy wrote: Thu Sep 17, 2020 1:07 am
Kiudee wrote: Thu Sep 17, 2020 1:01 am Thanks Ferdy for making your tool available! I like some of the plots you output for the results.
I wanted to point out that Optuna uses tree-structured Parzen estimators as its model, which do not model interactions between parameters. Tools based on Gaussian processes (like the chess-tuning-tools or the tool released by thomasahle) take all interactions into account and are thus able to interpolate/extrapolate much more accurately.
Thanks for the info. The one from thomasahle takes a lot of memory; I tried it before. I have not yet tried your chess-tuning-tools. I will try it someday and compare it with Optuna.
Nevergrad might also be an interesting alternative. It offers a wide variety of optimization methods and has a nice ask-and-tell interface.
Jörg Oster
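For reference, a minimal sketch of Nevergrad's ask-and-tell loop for the same kind of tuning problem. The play_match() helper is a hypothetical stand-in for running a quick match and returning a loss to minimize, and the parameter ranges are only examples.

Code: Select all

import nevergrad as ng

# Describe the tunable parameters.
param = ng.p.Dict(
    pawn_value=ng.p.Scalar(lower=50, upper=150).set_integer_casting(),
    rook_value=ng.p.Scalar(lower=400, upper=600).set_integer_casting(),
)
optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=100)

for _ in range(optimizer.budget):
    candidate = optimizer.ask()              # get a parameter set to try
    loss = play_match(**candidate.value)     # hypothetical: lower is better
    optimizer.tell(candidate, loss)          # report the result back

print(optimizer.provide_recommendation().value)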
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Yet another parameter tuner using optuna framework

Post by jdart »

Interesting. I also found this software:

https://github.com/automl/HpBandSter

which seems kind of similar. I have tried this sort of thing before, especially for search parameters, but the problem I've found is that the effect of varying them can be quite small. So you are trying to find the optimum point on what is basically a very "flat" surface, and with a method that produces noisy objective measures on top of that. It is therefore hard to get convergence.