## Question on Texel's Tuning Method

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

odomobo
Posts: 79
Joined: Thu Jul 05, 2018 11:09 pm
Location: Chicago, IL
Full name: Josh Odom

### Re: Question on Texel's Tuning Method

Pio wrote:
Fri Jul 10, 2020 1:43 pm
You should first convert your scores to win probabilities so the big scores won't dominate. It is the mean squares of the win probabilities you want to minimise.
Yeah, texel tuning does this via a sigmoid function.
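For readers unfamiliar with the mapping, here is a minimal sketch of the sigmoid commonly used in Texel tuning (the constant K = 1 and the 400 scale are the conventional choices from the CPW description, assumed here, not the only possible ones):

```python
def win_probability(score_cp: float, k: float = 1.0) -> float:
    """Map a centipawn score to an expected score in [0, 1]."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

print(win_probability(0))    # → 0.5 (a dead-equal score)
print(win_probability(100))  # noticeably above 0.5
print(win_probability(-300)) # well below 0.5
```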

Posts: 769
Joined: Mon Dec 15, 2008 10:45 am

### Re: Question on Texel's Tuning Method

odomobo wrote:
Fri Jul 10, 2020 2:03 pm
Pio wrote:
Fri Jul 10, 2020 1:43 pm
You should first convert your scores to win probabilities so the big scores won't dominate. It is the mean squares of the win probabilities you want to minimise.
Yeah, texel tuning does this via a sigmoid function.
It isn't only about getting the probabilities; the absolute values also have a different impact on the error:

Code: Select all

``````
0.61 (80cp) 0.54 (30cp) 0.07 (error)
0.97 (580cp) 0.95 (530cp) 0.02 (error)
``````
Passed through the sigmoid, the same 50 cp difference produces a smaller error for large scores than for small ones. For this reason the tuner works on removing the 80/30 cp errors of "unclear" positions before those of decisive positions such as the 580/530 cp values. The same effect could be achieved explicitly with weights if the tuner used absolute errors (I guess...)
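The numbers in that snippet can be reproduced with the conventional Texel sigmoid (K = 1, scale 400, assumed here). Note that with these constants the second difference prints as 0.01 rather than 0.02; the quoted 0.02 apparently comes from subtracting the already-rounded probabilities. The qualitative point, that the same 50 cp gap produces a smaller error at high scores, holds either way.

```python
def win_probability(score_cp, k=1.0):
    """Conventional Texel sigmoid: centipawns -> expected score."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

# Same 50 cp gap, once near equality and once deep in winning territory.
for hi, lo in [(80, 30), (580, 530)]:
    p_hi, p_lo = win_probability(hi), win_probability(lo)
    print(f"{p_hi:.2f} ({hi}cp) {p_lo:.2f} ({lo}cp) {p_hi - p_lo:.2f} (error)")
```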

hgm
Posts: 25586
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

### Re: Question on Texel's Tuning Method

Yes, that is true. The non-linear correction by the sigmoid is equivalent to weighting the data. E.g. if you apply a correction with a logarithmic tail, it would be equivalent to minimizing the (square of the) relative error, because DELTA(log(x)) = DELTA(x)/x. Corrections that asymptotically go to a constant (which the log doesn't) assign even less weight to the high scores. (E.g. a 1/x tail would weight as 1/(x*x).)
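A quick numeric sketch of that equivalence (the 80 and 580 sample scores and the 50 cp gap are taken from the earlier post; everything else is illustrative):

```python
import math

# DELTA(log(x)) is approximately DELTA(x)/x: the same 50 cp gap
# contributes a much smaller log-domain error at a high score
# than at a low one, i.e. the log tail weights errors by ~1/x.
for x, dx in [(80, 50), (580, 50)]:
    log_err = math.log(x + dx) - math.log(x)
    rel_err = dx / x  # first-order approximation DELTA(x)/x
    print(f"x={x}: log-domain error {log_err:.3f}, dx/x {rel_err:.3f}")
```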

odomobo
Posts: 79
Joined: Thu Jul 05, 2018 11:09 pm
Location: Chicago, IL
Full name: Josh Odom

### Re: Question on Texel's Tuning Method

Fri Jul 10, 2020 3:03 pm
odomobo wrote:
Fri Jul 10, 2020 2:03 pm
Pio wrote:
Fri Jul 10, 2020 1:43 pm
You should first convert your scores to win probabilities so the big scores won't dominate. It is the mean squares of the win probabilities you want to minimise.
Yeah, texel tuning does this via a sigmoid function.
It isn't only about getting the probabilities; the absolute values also have a different impact on the error:

Code: Select all

``````
0.61 (80cp) 0.54 (30cp) 0.07 (error)
0.97 (580cp) 0.95 (530cp) 0.02 (error)
``````
Passed through the sigmoid, the same 50 cp difference produces a smaller error for large scores than for small ones. For this reason the tuner works on removing the 80/30 cp errors of "unclear" positions before those of decisive positions such as the 580/530 cp values. The same effect could be achieved explicitly with weights if the tuner used absolute errors (I guess...)
What you're saying makes sense. However, the results you showed aren't what I would expect from texel tuning, at least from the algo listed on CPW. Instead, I would expect to see:

Code: Select all

``````
1.00, 0.54 (30cp) 0.46 (error)
1.00, 0.95 (530cp) 0.05 (error)
``````
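Those expected values check out under the conventional sigmoid (K = 1, scale 400, as on the CPW description); the game result of 1.0 for both positions is an assumption for illustration:

```python
def win_probability(score_cp, k=1.0):
    """Conventional Texel sigmoid: centipawns -> expected score."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

# Texel tuning compares the sigmoid of the search score against
# the actual game result R in {0.0, 0.5, 1.0}.
result = 1.0  # assume both sample positions came from won games
for cp in (30, 530):
    p = win_probability(cp)
    print(f"{result:.2f}, {p:.2f} ({cp}cp) {result - p:.2f} (error)")
```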

Posts: 769
Joined: Mon Dec 15, 2008 10:45 am

### Re: Question on Texel's Tuning Method

odomobo wrote:
Fri Jul 10, 2020 3:45 pm
Fri Jul 10, 2020 3:03 pm
odomobo wrote:
Fri Jul 10, 2020 2:03 pm
Pio wrote:
Fri Jul 10, 2020 1:43 pm
You should first convert your scores to win probabilities so the big scores won't dominate. It is the mean squares of the win probabilities you want to minimise.
Yeah, texel tuning does this via a sigmoid function.
It isn't only about getting the probabilities; the absolute values also have a different impact on the error:

Code: Select all

``````
0.61 (80cp) 0.54 (30cp) 0.07 (error)
0.97 (580cp) 0.95 (530cp) 0.02 (error)
``````
Passed through the sigmoid, the same 50 cp difference produces a smaller error for large scores than for small ones. For this reason the tuner works on removing the 80/30 cp errors of "unclear" positions before those of decisive positions such as the 580/530 cp values. The same effect could be achieved explicitly with weights if the tuner used absolute errors (I guess...)
What you're saying makes sense. However, the results you showed aren't what I would expect from texel tuning, at least from the algo listed on CPW. Instead, I would expect to see:

Code: Select all

``````
1.00, 0.54 (30cp) 0.46 (error)
1.00, 0.95 (530cp) 0.05 (error)
``````
Ok, with fixed result values as targets we cannot generate errors that scale with the absolute score. But we can get the same error amount for different game results.

Code: Select all

``````
general result values are: 1.0 / 0.5 / 0.0

score: +30 cp -> 0.54
1. error to result 0.0: 0.54
2. error to result 1.0: 0.46
3. error to result 0.5: 0.04

score: +530 cp -> 0.95
4. error to result 0.0: 0.95
5. error to result 1.0: 0.05
6. error to result 0.5: 0.45
``````
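The table can be reproduced like this under the same conventional sigmoid (K = 1, scale 400); this is a sketch, not the poster's actual tooling:

```python
def win_probability(score_cp, k=1.0):
    """Conventional Texel sigmoid: centipawns -> expected score."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

# Absolute error of each score against each possible game result.
for cp in (30, 530):
    p = win_probability(cp)
    for result in (0.0, 1.0, 0.5):
        print(f"score: +{cp} cp -> {p:.2f}, "
              f"error to result {result}: {abs(result - p):.2f}")
```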
Now, some thoughts on this:

1. If a +530 cp score produces a dominant error in the test set, e.g. 100 samples with error 0.95 out of 1000, then the real question is: why does the evaluation give a winning score even though the game was lost? In other words, it is a real error, not an incorrect weighting of errors.

2. If we now compare cases 3 and 5 and, for the sake of discussion, treat the error amounts as equal, then I see the same question that you asked and that I had already asked myself. It is clear that an error of 0.04 on a result of 0.5 is more relevant (has more influence on the game result) than the same error on an already decided game (0.95/1.00).

The basic idea stays the same and can be worked out in different ways. For example, errors on games with result 0.5 can be weighted more strongly.
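One way to sketch that weighting idea; the `draw_weight` knob and its value are purely illustrative assumptions, not a standard recipe:

```python
def win_probability(score_cp, k=1.0):
    """Conventional Texel sigmoid: centipawns -> expected score."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

def weighted_mse(samples, draw_weight=2.0):
    """Mean squared error with drawn games up-weighted.
    `draw_weight` is an illustrative knob, not a standard value."""
    total, weight_sum = 0.0, 0.0
    for score_cp, result in samples:
        w = draw_weight if result == 0.5 else 1.0
        e = result - win_probability(score_cp)
        total += w * e * e
        weight_sum += w
    return total / weight_sum

samples = [(30, 0.5), (530, 1.0)]  # (search score, game result)
print(weighted_mse(samples))
```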

3. The idea of error calculation was not new, but the Texel method could not only increase the playing strength of already strong engines, it could also adapt playing styles or simply avoid guesswork. I had the same questions we are discussing right now, but I use a different method based on the error calculation.

Instead of storing game results in an EPD file, I use a "trainer" (any engine) and store its evaluation. The quality of this evaluation is scalable, I can use different engines for the same data, and... let your imagination run free.

Code: Select all

``````
example:
2b1r1k1/1pq1bpp1/p1n4p/3n4/2P1p3/1N2B2P/PP2QPP1/3R1BK1 w - - ce -33 acd 3 bm cxd5
``````
So far I have not seen it published that others do the same, although this step is relatively trivial and at least solves the questions you asked initially. In any case I can calculate absolute errors (e.g. 50 cp) implicitly weighted by the sigmoid function (as HGM noted).
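A minimal sketch of reading the stored `ce` (centipawn evaluation) opcode back out of such a record. Real EPD parsing is more involved (quoted operands, semicolon-terminated opcodes), and this parser is an illustrative assumption, not the poster's code:

```python
def parse_epd_ce(epd_line: str):
    """Extract board, side to move, and the `ce` (centipawn
    evaluation) operand from a whitespace-separated EPD record."""
    fields = epd_line.split()
    board, stm = fields[0], fields[1]  # piece placement, side to move
    ops = fields[4:]                   # opcodes after the 4 board fields
    ce = None
    for i, tok in enumerate(ops):
        if tok == "ce" and i + 1 < len(ops):
            ce = int(ops[i + 1].rstrip(";"))
    return board, stm, ce

line = ("2b1r1k1/1pq1bpp1/p1n4p/3n4/2P1p3/1N2B2P/PP2QPP1/3R1BK1"
        " w - - ce -33 acd 3 bm cxd5")
board, stm, ce = parse_epd_ce(line)
print(ce)  # → -33
```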

What do you think?