## Question on Texel's Tuning Method

**Moderators:** hgm, Dann Corbit, Harvey Williamson


### Re: Question on Texel's Tuning Method

It isn't only about getting the probabilities; the absolute values will also have a different impact on the error:

```
0.61 (80cp) 0.54 (30cp) 0.07 (error)
0.97 (580cp) 0.95 (530cp) 0.02 (error)
```

This could be done explicitly by using weights, if the tuner uses absolute errors. (I guess...)
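For reference, those probabilities are consistent with the usual Texel sigmoid 1/(1 + 10^(-K*s/400)). The scaling constant K = 1.0 below is my assumption (every engine fits its own), but it happens to reproduce the quoted numbers:

```python
# Sketch of the score-to-probability mapping behind the numbers above.
# K = 1.0 is an assumed scaling constant, not a value from the thread.

def win_probability(score_cp, K=1.0):
    """Map a centipawn score to an expected game result in [0, 1]."""
    return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

for a, b in [(80, 30), (580, 530)]:
    pa, pb = round(win_probability(a), 2), round(win_probability(b), 2)
    # The quoted "error" is the difference of the (rounded) probabilities.
    print(f"{pa:.2f} ({a}cp) {pb:.2f} ({b}cp) {pa - pb:.2f} (error)")
```

The same 50 cp gap produces a probability difference of 0.07 near equality but only 0.02 at decisive scores, which is exactly the weighting effect being discussed.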

- hgm
  **Posts:** 25586 **Joined:** Fri Mar 10, 2006 9:06 am **Location:** Amsterdam **Full name:** H G Muller

### Re: Question on Texel's Tuning Method

Yes, that is true. The non-linear correction by the sigmoid is equivalent to weighting the data. E.g. if you apply a correction with a logarithmic tail, it would be equivalent to minimizing the (square of the) relative error, because DELTA(log(x)) = DELTA(x)/x. Corrections that asymptotically go to a constant (which the log doesn't) assign even less weight to the high scores. (E.g. a 1/x tail would weight as 1/(x*x).)
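A quick numerical check of that identity (my sketch, not from the post): for small differences, the gap between logarithms approximates the relative difference, which is why least squares on log-corrected values behaves like minimizing relative error:

```python
import math

# DELTA(log(x)) ~= DELTA(x)/x for small DELTA(x): squared error on a
# log scale therefore behaves like squared *relative* error.
x, dx = 100.0, 5.0
log_diff = math.log(x + dx) - math.log(x)  # DELTA(log(x)), about 0.0488
rel_diff = dx / x                          # DELTA(x)/x,    exactly 0.05
print(log_diff, rel_diff)
```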

### Re: Question on Texel's Tuning Method

> Desperado wrote: ↑Fri Jul 10, 2020 3:03 pm
> It isn't only about getting the probabilities; the absolute values will also have a different impact on the error. A 50 cp error maps to a smaller sigmoid error for large scores than for small scores, so the tuning reduces 80/30 cp errors on "unclear" positions before decisive positions such as 580/530 cp values.
>
> ```
> 0.61 (80cp) 0.54 (30cp) 0.07 (error)
> 0.97 (580cp) 0.95 (530cp) 0.02 (error)
> ```
>
> This could be done explicitly by using weights, if the tuner uses absolute errors. (I guess...)

What you're saying makes sense. However, the results you showed aren't what I would expect from Texel tuning, at least from the algorithm listed on CPW. Instead, I would expect to see:

```
1.00, 0.54 (30cp) 0.46 (error)
1.00, 0.95 (530cp) 0.05 (error)
```
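Those expected values follow from comparing the actual game result with the predicted probability. A minimal sketch, assuming the standard Texel sigmoid with K = 1.0 and a result of 1.0 (a won game):

```python
def texel_error(result, score_cp, K=1.0):
    """Game result minus the predicted result for a centipawn score."""
    predicted = 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))
    return result - predicted

# A won game (result 1.0), evaluated at +30cp and at +530cp:
print(round(texel_error(1.0, 30), 2))   # 0.46: +30cp contradicts the win
print(round(texel_error(1.0, 530), 2))  # 0.05: +530cp agrees with the win
```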

### Re: Question on Texel's Tuning Method

> odomobo wrote: ↑Fri Jul 10, 2020 3:45 pm
> What you're saying makes sense. However, the results you showed aren't what I would expect from Texel tuning, at least from the algorithm listed on CPW. Instead, I would expect to see:
>
> ```
> 1.00, 0.54 (30cp) 0.46 (error)
> 1.00, 0.95 (530cp) 0.05 (error)
> ```

Ok, using result values (fixed bounds), we cannot generate errors that have the same amount relative to the absolute score.

But we can have the same error amount for different game results.

```
general result values are: 1.0 / 0.5 / 0.0
score: +30 cp -> 0.54
1. error to result 0.0: 0.54
2. error to result 1.0: 0.46
3. error to result 0.5: 0.04
score: +530 cp -> 0.95
4. error to result 0.0: 0.95
5. error to result 1.0: 0.05
6. error to result 0.5: 0.45
```
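That table can be reproduced mechanically; a sketch, again assuming the K = 1.0 sigmoid (the error is simply the distance between the predicted probability and each possible result):

```python
def win_probability(score_cp, K=1.0):
    """Texel-style sigmoid; K = 1.0 is an assumed scaling constant."""
    return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

for score in (30, 530):
    p = win_probability(score)
    print(f"score: +{score} cp -> {p:.2f}")
    for result in (0.0, 1.0, 0.5):
        print(f"  error to result {result}: {abs(result - p):.2f}")
```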

1. If a score of +530 cp produces a dominant error in the test set, e.g. 100 errors of 0.95 out of 1000 samples, then the real question is: why does the evaluation give a winning score even though the game was lost? In other words, it is a real error, not an incorrect weighting of errors.

2. If we now compare cases 3 and 5 and treat the error amounts as equal for the sake of discussion, then I see the same question that you asked and that I had already asked myself. It is clear that an error of 0.04 on a result of 0.5 is more relevant (has more influence on the game result) than the same error on an already decided game (0.95/1.00).

The idea itself stays the same and can be worked out in different ways. For example, errors on positions with result value 0.5 can be weighted more strongly.
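As one hypothetical way to spell that out (the function name and the draw weight of 2.0 are my own illustration, not Desperado's actual tuner):

```python
def weighted_squared_error(result, predicted, draw_weight=2.0):
    """Squared error, weighted more strongly for drawn (0.5) results."""
    weight = draw_weight if result == 0.5 else 1.0
    return weight * (result - predicted) ** 2
```

With draw_weight=2.0, a given error against a drawn game counts twice as much in the total as the same error against a decided game.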

3. The idea of the error calculation was not new, but the Texel method could not only increase the playing strength of already strong engines; it could also adapt playing styles or simply avoid guessing. I had the same questions we are discussing right now, but I use a different method based on the error calculation. Instead of storing game results in an EPD file, I use a "trainer" (any engine) and store an evaluation value. The level of this evaluation is scalable, I can use different engines for the same data, and... let your imagination run free.

```
example:
2b1r1k1/1pq1bpp1/p1n4p/3n4/2P1p3/1N2B2P/PP2QPP1/3R1BK1 w - - ce -33 acd 3 bm cxd5
```
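The opcodes in that record follow common EPD conventions (ce = centipawn evaluation, acd = analysis depth, bm = best move). A minimal parser sketch; the helper name is mine, and it assumes space-separated opcode/value pairs without the semicolons full EPD would use:

```python
def parse_trainer_epd(line):
    """Split a simple EPD record into the position and its opcode fields."""
    tokens = line.split()
    record = {"position": " ".join(tokens[:4])}  # board, side, castling, ep
    ops = tokens[4:]
    for key, value in zip(ops[0::2], ops[1::2]):
        record[key] = value
    return record

epd = ("2b1r1k1/1pq1bpp1/p1n4p/3n4/2P1p3/1N2B2P/PP2QPP1/3R1BK1"
       " w - - ce -33 acd 3 bm cxd5")
record = parse_trainer_epd(epd)
print(record["ce"], record["acd"], record["bm"])  # -33 3 cxd5
```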

In any case, I can calculate absolute errors (e.g. 50 cp) implicitly weighted by the sigmoid function (as HGM noted).

What do you think?