Texel Tuning

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Texel Tuning

Post by Ferdy »

tomitank wrote: Sat Jul 07, 2018 4:28 pm
Ferdy wrote: Sat Jul 07, 2018 2:50 pm
tomitank wrote: Fri Jul 06, 2018 11:13 pm But if you start all the material values from zero, they do not reach the normal values.
eg: (100, 300, 300, 500, 900)

this solution helps...get closer to it

Is it possible with the Texel method at all? (to reach the normal piece values from zero)
Yes, it is possible. I tried it using only 20k training positions and stopped after some iterations with the queen value still increasing. I use increments, or steps, of
[+5, -5, +10, -10], with K=0.65.
Thanks, the graph is very good!

I believe the increment is also important. Sometimes the tuning stops at half of the normal value.

Did you calculate the current "K" (0.65) for the 20k positions beforehand?
Or did you run the "K optimizer" function again (with zero piece values) at the beginning of this tuning?

(My "K" = 1.5 with zero pieces value and 1.7 with "normal" values)

If I understand correctly, I must still use the K = 1.7 value for the current training set, even if the next K changes (after the tuning). So the current scaling does not change.

In short:
When should I "save" (and never change with the current training set) the value of K?
1: Before each tuning run (e.g. tuned parameters already exist)
2: Once (e.g. all parameters are zero)

Thanks for your help!

Tamás
K=0.65 is just a sample.

Normally you compute the value of K starting from your current best parameters.

I use a different way of finding the K of my engine: collect positions where the side to move has an advantage of around 1 pawn, or 100 cp, according to a strong engine such as SF; use these as start positions; and play actual game matches from those start positions (against different opponents; it is better to use stronger opponents), with your engine on the side with the advantage.

Then calculate some stats, i.e. wins, draws, losses.
scoring_rate = (wins + draws/2) / total_games

That scoring_rate will be your sigmoid value for being 1 pawn ahead.

sigmoid = 1 / (1 + 10^(-score*K/400))

Solve for K

Given:
sigmoid = scoring_rate
score = 100, or 1 pawn

K = -400/score * log10((1/sigmoid) - 1)
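Ferdy's derivation can be sketched as a small helper (a minimal sketch; kFromScoringRate is an illustrative name, not code from any engine mentioned here; score is in centipawns, so 100 = one pawn):

```javascript
// Solve the sigmoid for K, given a measured scoring rate at a known
// advantage: sigmoid = 1 / (1 + 10^(-score*K/400))
// => K = -400/score * log10(1/sigmoid - 1)
function kFromScoringRate(scoringRate, score = 100) {
  return (-400 / score) * Math.log10(1 / scoringRate - 1);
}

console.log(kFromScoringRate(0.75).toFixed(1)); // "1.9"
console.log(kFromScoringRate(0.6).toFixed(1));  // "0.7"
```

These are the same two figures worked out by hand further down in the post.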

Have a look on this thread too.
http://talkchess.com/forum3/viewtopic.p ... el#p764431

The stronger the engine the higher its K value.

If you give SF a 1 pawn advantage, it may score an average of 75% or more against some engines. Its K, using a 0.75 scoring rate, could be:
K = -400/100 * log10(1/0.75 - 1)
K = 1.9

If you give your engine a 1 pawn advantage against some top engines, maybe it can score 60% or more. Its K, using 0.6, could be:
K = -400/100 * log10(1/0.6 - 1)
K = 0.7

The advantage of using a smaller K value is that you will have a higher error from the start. The higher the error, the better for tuning your parameters: you have more room to vary the values of different parameters. This suits Texel tuning because it changes parameter values in sequence, so the parameters tried first have more chances to reduce the error. A smaller increment is recommended so that the later parameters still get a chance to change while reducing the error.
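The sequential scheme described here can be sketched as the classic Texel local-search loop (a minimal sketch; texelTune, totalError and the step list are illustrative names, not code from any engine mentioned in this thread):

```javascript
// Coordinate-wise local search: each parameter is tried in sequence with a
// list of increments, and a change is kept only if it lowers the total
// error over the training set. Loop until a full pass yields no improvement.
function texelTune(params, totalError, steps = [5, -5, 10, -10]) {
  let bestError = totalError(params);
  let improved = true;
  while (improved) {
    improved = false;
    for (let i = 0; i < params.length; i++) {  // parameters in sequence
      for (const step of steps) {
        params[i] += step;
        const err = totalError(params);
        if (err < bestError) {
          bestError = err;                     // keep the improvement
          improved = true;
        } else {
          params[i] -= step;                   // revert the change
        }
      }
    }
  }
  return params;
}
```

With this structure, the parameters iterated over first absorb most of the early error reduction, which is why the post recommends small increments.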

K = 1.9
score = 100
sigmoid = 0.75
result = 1
error = 1 - 0.75 = 0.25
error_sq = error^2

K = 0.7
score = 100
sigmoid = 0.6
result = 1
error = 1 - 0.6 = 0.4
error_sq = error^2
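The two cases above can be checked with a short sketch (sigmoid and squaredError are illustrative names):

```javascript
// Squared error of one training position: result is the game outcome
// (1 = win, 0.5 = draw, 0 = loss), score is the eval in centipawns.
function sigmoid(score, K) {
  return 1 / (1 + Math.pow(10, (-K * score) / 400));
}

function squaredError(result, score, K) {
  const e = result - sigmoid(score, K);
  return e * e;
}

console.log(squaredError(1, 100, 1.9)); // ≈ 0.063 (error ≈ 0.25)
console.log(squaredError(1, 100, 0.7)); // ≈ 0.160 (error ≈ 0.40)
```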

There are engines that are weaker in the endgame, so even with a high score their scoring rate is low. It is therefore better to have a collection of start positions from different phases of the game.
tomitank wrote:When should I "save" (and never change with the current training set) the value of K?
1: Before each tuning run (e.g. tuned parameters already exist)
2: Once (e.g. all parameters are zero)
Answer: 1
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Texel Tuning

Post by jdart »

This is suitable for Texel tuning because it tries to change parameter values in sequence.
There is no need to do this. Any gradient descent method will work, and most of those tune all parameters simultaneously.
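A minimal sketch of that idea, assuming errFn computes the training-set error; gradientStep, lr and eps are illustrative names, and the finite-difference gradient stands in for whatever gradient method an engine actually uses:

```javascript
// One gradient-descent step: estimate the partial derivative of the error
// for every parameter by finite differences, then update all parameters
// simultaneously instead of one at a time.
function gradientStep(params, errFn, lr = 0.01, eps = 1e-3) {
  const base = errFn(params);
  const grad = params.map((_, i) => {
    const p = params.slice();
    p[i] += eps;                      // nudge one parameter
    return (errFn(p) - base) / eps;   // finite-difference slope
  });
  return params.map((v, i) => v - lr * grad[i]); // simultaneous update
}
```

Repeating this step until the error stops falling replaces the sequential try-and-revert loop entirely.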

--Jon
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Texel Tuning

Post by Sven »

tomitank wrote: Sat Jul 07, 2018 4:28 pm
Sven wrote: Sat Jul 07, 2018 12:35 pm
tomitank wrote: Fri Jul 06, 2018 11:13 pm Here is my actual code:
https://github.com/tomitank/tomitankChe ... /tuning.js
You restrict the set of training positions to those with an eval between -600 and +600 (function good_tuning_value()). But you do not decide this once in the beginning but every time you visit a position. So the decision always depends on the current parameter values. This may cause some instability: positions are sometimes included (if eval fits the interval) and sometimes excluded (if it doesn't fit). Therefore you might get a wrong comparison of errors due to different position sets being compared, so your algorithm could fail to find the correct minimal error.

I suggest letting good_tuning_value() always return true, and seeing if that helps. Filtering of positions should be done outside the tuning process, which will also improve overall performance as a side effect (by avoiding useless eval/qsearch calls for positions you exclude anyway).
Thanks, I'll try it!
There is another issue, probably a minor one. Your function compute_optimal_k() starts at i = -1 and tries all values up to +2 in steps of 0.1. But K <= 0 does not make much sense since you usually want Sigmoid(K,S) to return a value > 0.5 for positive S and < 0.5 for negative S. So it would be sufficient to start at i = 0.1 in your case.
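The scan described here could look like the following (a sketch only; computeOptimalK and meanError are illustrative names, not the actual tuning.js code; the range follows the suggestion of starting at 0.1):

```javascript
// Try K from 0.1 to 2.0 in steps of 0.1 and keep the K with the lowest
// mean error over the training set. K <= 0 is skipped, since the sigmoid
// should map positive scores above 0.5 and negative scores below it.
function computeOptimalK(meanError) {
  let bestK = 0.1;
  let bestErr = Infinity;
  for (let k = 0.1; k <= 2.0 + 1e-9; k += 0.1) {
    const err = meanError(k);
    if (err < bestErr) {
      bestErr = err;
      bestK = k;
    }
  }
  return bestK;
}
```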
Sven Schüle (engine author: Jumbo, KnockOut, Surprise)
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: Texel Tuning

Post by tomitank »

Sven wrote: Sat Jul 07, 2018 12:35 pm
tomitank wrote: Fri Jul 06, 2018 11:13 pm Here is my actual code:
https://github.com/tomitank/tomitankChe ... /tuning.js
You restrict the set of training positions to those with an eval between -600 and +600 (function good_tuning_value()). But you do not decide this once in the beginning but every time you visit a position. So the decision always depends on the current parameter values. This may cause some instability: positions are sometimes included (if eval fits the interval) and sometimes excluded (if it doesn't fit). Therefore you might get a wrong comparison of errors due to different position sets being compared, so your algorithm could fail to find the correct minimal error.

I suggest letting good_tuning_value() always return true, and seeing if that helps. Filtering of positions should be done outside the tuning process, which will also improve overall performance as a side effect (by avoiding useless eval/qsearch calls for positions you exclude anyway).
I tested with only ~10K positions and here are the results:

Pre-selected Only Good Value (abs(score) < 600)
Final pawn value: 93
Final knight value: 338
Final bishop value: 347
Final rook value: 560
Final queen value: 777
Best Error: 0.07074054288122265

Pre-selected !isMate()
Final pawn value: 84
Final knight value: 314
Final bishop value: 322
Final rook value: 521
Final queen value: 721
Best Error: 0.061084428508857154

Select during tuning..my original approach
Final pawn value: 57
Final knight value: 227
Final bishop value: 226
Final rook value: 372
Final queen value: 429
Best Error: 0.07259230564130538

The !isMate pre-selection is the best with the current positions.
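The pre-selection being compared above can be sketched like this (preselectPositions and evaluate are illustrative names, not the actual tuning.js API):

```javascript
// Filter the training set once, before tuning starts, so the position set
// stays fixed while the parameters change. Here evaluate(pos) stands in for
// the engine's static eval (or qsearch score) in centipawns.
function preselectPositions(positions, evaluate, maxAbsEval = 600) {
  return positions.filter(pos => Math.abs(evaluate(pos)) < maxAbsEval);
}
```

Filtering once up front avoids the instability Sven describes, where the set of included positions shifts as the parameters being tuned change.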