## Texel tuning speed

Discussion of chess software programming and technical issues.

Joost Buijs
Posts: 1014
Joined: Thu Jul 16, 2009 8:47 am
Location: Almere, The Netherlands

### Re: Texel tuning speed

Sven wrote:
Thu Aug 30, 2018 9:52 pm
xr_a_y wrote:
Wed Aug 29, 2018 10:16 pm
It seems Weini is able to run the qsearch needed for each position and each evaluation of the error in around 0.06 milliseconds.

Let's say I have only 100,000 positions and want to optimize 10 parameters.
It will require, let's say, at least 100,000 x 10 x 100 qsearch calls, so 2h40min of computation.
Do you mean *one* qsearch() call takes 0.06 ms? That would be quite a lot; I don't think that is what you mean. On the other hand, 100,000 qsearch() calls plus one error calculation should also take much longer than 0.06 ms.
Indeed, these numbers seem a bit weird. It has been a long time since I measured it, but the evaluation function of my engine takes ~700 processor cycles, roughly 175 ns at 4 GHz. Most of the time a quiescence search takes < 1 µs. I don't think this will be very different for Weini.

mar
Posts: 2088
Joined: Fri Nov 26, 2010 1:00 pm
Location: Czech Republic
Full name: Martin Sedlak

### Re: Texel tuning speed

Sven wrote:
Thu Aug 30, 2018 9:33 pm
I do not understand how an eval cache can help to speed up Texel tuning.
It depends on what positions you use. Since I extracted the positions from actual self-play games, they weren't really "random" positions but rather naturally ordered as the individual games progressed; that's why an eval cache helped a lot in my case.
Martin Sedlak
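A minimal sketch of the kind of eval cache described above: positions taken in game order revisit the same qsearch subtrees, so cached evaluations get reused often. This is an illustrative direct-mapped cache, not Martin's or Weini's actual code; all names are made up.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One slot per hash bucket: Zobrist key plus the cached static eval.
struct EvalCacheEntry {
    uint64_t key = 0;
    int      score = 0;
};

class EvalCache {
public:
    explicit EvalCache(size_t sizePow2 = 1 << 20)
        : table_(sizePow2), mask_(sizePow2 - 1) {}

    bool probe(uint64_t key, int& score) const {
        const EvalCacheEntry& e = table_[key & mask_];
        if (e.key == key) { score = e.score; return true; }
        return false;
    }

    void store(uint64_t key, int score) {
        table_[key & mask_] = {key, score};  // always-replace scheme
    }

    // Important for tuning: after every parameter change the cached
    // evaluations are stale and the whole cache must be cleared.
    void clear() { std::fill(table_.begin(), table_.end(), EvalCacheEntry{}); }

private:
    std::vector<EvalCacheEntry> table_;
    uint64_t mask_;  // sizePow2 - 1, so key & mask_ is a cheap modulo
};
```

The always-replace policy is deliberately simple; for tuning workloads where consecutive positions come from the same game, even this naive scheme gets a high hit rate.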

xr_a_y
Posts: 988
Joined: Sat Nov 25, 2017 1:28 pm
Location: France

### Re: Texel tuning speed

Joost Buijs wrote:
Fri Aug 31, 2018 5:43 am
Sven wrote:
Thu Aug 30, 2018 9:52 pm
xr_a_y wrote:
Wed Aug 29, 2018 10:16 pm
It seems Weini is able to run the qsearch needed for each position and each evaluation of the error in around 0.06 milliseconds.

Let's say I have only 100,000 positions and want to optimize 10 parameters.
It will require, let's say, at least 100,000 x 10 x 100 qsearch calls, so 2h40min of computation.
Do you mean *one* qsearch() call takes 0.06 ms? That would be quite a lot; I don't think that is what you mean. On the other hand, 100,000 qsearch() calls plus one error calculation should also take much longer than 0.06 ms.
Indeed, these numbers seem a bit weird. It has been a long time since I measured it, but the evaluation function of my engine takes ~700 processor cycles, roughly 175 ns at 4 GHz. Most of the time a quiescence search takes < 1 µs. I don't think this will be very different for Weini.
Maybe I am doing something wrong, but I compute $E = \frac{1}{N} \sum_i (R_i - S_i)^2$, where $S_i$ is the sigmoid that depends on the score of the i-th position. So I run a qsearch for each i and each computation of E. And I compute E quite often, using the simple minimum-finding algorithm given on the wiki page about Texel tuning.

Doing so, I can easily measure the time required to compute E once, and thus the mean time needed for one qsearch. For now Weini seems quite slow at this. But I don't call qsearch directly; there is some context to initialize around each qsearch that may explain the slowdown.

I'll be back with more timing soon.
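The error computation described above can be sketched as follows. `qscores` stands in for the per-position qsearch() results (white's point of view), `K` is the usual scaling constant fitted once on the data set, and all names are illustrative rather than Weini's actual code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// The usual Texel sigmoid: maps a centipawn score to a win expectancy.
double sigmoid(double score, double K) {
    return 1.0 / (1.0 + std::pow(10.0, -K * score / 400.0));
}

// Mean squared error over the training set.
// results[i] is the game result in {0.0, 0.5, 1.0};
// qscores[i] is the qsearch score of position i from white's view.
double texelError(const std::vector<double>& results,
                  const std::vector<double>& qscores, double K) {
    double e = 0.0;
    for (size_t i = 0; i < results.size(); ++i) {
        double d = results[i] - sigmoid(qscores[i], K);
        e += d * d;               // squared error, not just the difference
    }
    return e / results.size();    // mean over the N positions
}
```

Since the N qsearch calls dominate, anything that lets you skip or cache them (as discussed elsewhere in this thread) speeds up every single evaluation of E.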

xr_a_y
Posts: 988
Joined: Sat Nov 25, 2017 1:28 pm
Location: France

### Re: Texel tuning speed

OK, after getting rid of some useless context, I now get 0.01 ms (10 µs) for the mean qsearch call. Given your comments, this is still too much ...
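Measuring the mean qsearch cost the way described above amounts to timing one full pass over the positions and dividing by the call count. A sketch, with a placeholder `fakeQSearch` standing in for the engine's real qsearch() so the snippet compiles on its own:

```cpp
#include <chrono>

// Placeholder for the real qsearch(); the volatile sink keeps the
// compiler from optimizing the loop away.
static volatile long long sink = 0;
int fakeQSearch(int i) { sink = sink + i; return i & 0xFF; }

// Time nPositions calls and return the mean cost per call in microseconds.
double meanQSearchMicros(int nPositions) {
    using Clock = std::chrono::steady_clock;
    auto t0 = Clock::now();
    for (int i = 0; i < nPositions; ++i)
        fakeQSearch(i);
    auto t1 = Clock::now();
    double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
    return us / nPositions;
}
```

Using `steady_clock` rather than `system_clock` matters here: it is monotonic, so the measurement cannot go negative if the wall clock is adjusted mid-run.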

xr_a_y
Posts: 988
Joined: Sat Nov 25, 2017 1:28 pm
Location: France

### Re: Texel tuning speed

Sven wrote:
Thu Aug 30, 2018 9:52 pm
Therefore my question is: how often do you calculate the error function, once per training set and per set of parameter values (as it is intended), or once per position (which I would not understand)?
I run one qsearch per position for each error computation.
I compute E once for each set of parameters.

xr_a_y
Posts: 988
Joined: Sat Nov 25, 2017 1:28 pm
Location: France

### Re: Texel tuning speed

Weini's classic search is currently running at only 400 knps single-threaded.

That means around 0.0025 ms per evaluation.

One call to qsearch often leads to 5 to 10 calls to evaluation, so 0.01 ms per qsearch seems at least coherent with Weini's speed.

Weini is not using bitboards for move generation and many evaluation terms are still based on a mailbox data structure.

I see other engines running at 2 Mnps on the same hardware, which is 5 times better than Weini, so maybe bitboard move generation and evaluation should be the next step forward ...
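The back-of-envelope arithmetic above can be written down directly, using the figures from the post (400 knps, 5 to 10 evals per qsearch); the constant names are mine:

```cpp
// 400,000 nodes (roughly evaluations) per second, single-threaded.
constexpr double kNps = 400000.0;

// 1e6 / 400000 = 2.5 microseconds per evaluation.
constexpr double kUsPerEval = 1e6 / kNps;

// A qsearch of 5-10 evals then costs about 12.5-25 us, the same order
// of magnitude as the measured 10 us per qsearch call.
constexpr double usPerQSearch(int evalsPerQ) {
    return evalsPerQ * kUsPerEval;
}
```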

Ronald
Posts: 95
Joined: Tue Jan 23, 2018 9:18 am
Location: Rotterdam
Full name: Ronald Friederich
Contact:

### Re: Texel tuning speed

Joost Buijs wrote:
Thu Aug 30, 2018 7:57 am
In the training set, not every position has to be unique: a position that is won in game A can be drawn or lost in game B, and this all averages out in the error function. When you train with unique positions only, you have no statistical information whatsoever.
I think every position needs to be unique in the set, otherwise the tuning will be much less effective. The worst case is 3 copies of the same position with 3 different outcomes. This is a waste of time, because the total error over the 3 positions is minimal when the eval is 0 (and thus probably all your parameters are 0). With 2 copies of the same position it depends on the outcomes: Win-Loss also pulls the eval to 0, and Draw-Win/Loss will blur the result. The most important thing for every position is that you get the result right.
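The claim above can be checked numerically: for one position appearing twice with results 1 (win) and 0 (loss), the summed squared error $(1-S)^2 + (0-S)^2$ is smallest at $S = 0.5$, i.e. at evaluation 0. A small illustration (names are mine, K fixed at 1.0 for simplicity):

```cpp
#include <cmath>

// Standard Texel sigmoid with K = 1.0.
double sigmoid(double score, double K = 1.0) {
    return 1.0 / (1.0 + std::pow(10.0, -K * score / 400.0));
}

// Summed squared error for one position duplicated with results 1 and 0.
double pairError(double score) {
    double s = sigmoid(score);
    return (1.0 - s) * (1.0 - s) + (0.0 - s) * (0.0 - s);
}
```

At score 0 the error is 0.25 + 0.25 = 0.5, and any move of the score away from 0, in either direction, makes the sum strictly larger, which is exactly the pull toward eval 0 described above.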

Joost Buijs
Posts: 1014
Joined: Thu Jul 16, 2009 8:47 am
Location: Almere, The Netherlands

### Re: Texel tuning speed

Ronald wrote:
Fri Aug 31, 2018 9:23 am
Joost Buijs wrote:
Thu Aug 30, 2018 7:57 am
In the training set, not every position has to be unique: a position that is won in game A can be drawn or lost in game B, and this all averages out in the error function. When you train with unique positions only, you have no statistical information whatsoever.
I think every position needs to be unique in the set, otherwise the tuning will be much less effective. The worst case is 3 copies of the same position with 3 different outcomes. This is a waste of time, because the total error over the 3 positions is minimal when the eval is 0 (and thus probably all your parameters are 0). With 2 copies of the same position it depends on the outcomes: Win-Loss also pulls the eval to 0, and Draw-Win/Loss will blur the result. The most important thing for every position is that you get the result right.
Well, I clearly have a different view. Of course there are positions that are clearly won or lost, but for most positions this is not clear at all (otherwise chess would be solved), and then statistics come into play. Everybody is entitled to do it in his own way, of course.

Robert Pope
Posts: 514
Joined: Sat Mar 25, 2006 7:27 pm

### Re: Texel tuning speed

Joost Buijs wrote:
Fri Aug 31, 2018 2:56 pm
Ronald wrote:
Fri Aug 31, 2018 9:23 am
Joost Buijs wrote:
Thu Aug 30, 2018 7:57 am
In the training set, not every position has to be unique: a position that is won in game A can be drawn or lost in game B, and this all averages out in the error function. When you train with unique positions only, you have no statistical information whatsoever.
I think every position needs to be unique in the set, otherwise the tuning will be much less effective. The worst case is 3 copies of the same position with 3 different outcomes. This is a waste of time, because the total error over the 3 positions is minimal when the eval is 0 (and thus probably all your parameters are 0). With 2 copies of the same position it depends on the outcomes: Win-Loss also pulls the eval to 0, and Draw-Win/Loss will blur the result. The most important thing for every position is that you get the result right.
Well, I clearly have a different view. Of course there are positions that are clearly won or lost, but for most positions this is not clear at all (otherwise chess would be solved), and then statistics come into play. Everybody is entitled to do it in his own way, of course.
I think the question is: which do you gain more information from? For example, the same position from 10 different games, 6 of which are wins, or 10 unique positions, 6 of which are wins? Taken to the extreme, 200,000 instances of the same position would be awful for training, though it would give you a better measure of that position's win probability. We are learning based on both the board layout and the game score, so the more (realistic) variety we have in each, the better the training would be expected to go.

In practice, you are probably better off training on 200,000 positions, some of which appear multiple times, than filtering on unique positions and training on 190,000 positions.

Joost Buijs
Posts: 1014
Joined: Thu Jul 16, 2009 8:47 am
Location: Almere, The Netherlands

### Re: Texel tuning speed

Robert Pope wrote:
Fri Aug 31, 2018 7:16 pm
Joost Buijs wrote:
Fri Aug 31, 2018 2:56 pm
Ronald wrote:
Fri Aug 31, 2018 9:23 am
Joost Buijs wrote:
Thu Aug 30, 2018 7:57 am
In the training set, not every position has to be unique: a position that is won in game A can be drawn or lost in game B, and this all averages out in the error function. When you train with unique positions only, you have no statistical information whatsoever.
I think every position needs to be unique in the set, otherwise the tuning will be much less effective. The worst case is 3 copies of the same position with 3 different outcomes. This is a waste of time, because the total error over the 3 positions is minimal when the eval is 0 (and thus probably all your parameters are 0). With 2 copies of the same position it depends on the outcomes: Win-Loss also pulls the eval to 0, and Draw-Win/Loss will blur the result. The most important thing for every position is that you get the result right.
Well, I clearly have a different view. Of course there are positions that are clearly won or lost, but for most positions this is not clear at all (otherwise chess would be solved), and then statistics come into play. Everybody is entitled to do it in his own way, of course.
I think the question is: which do you gain more information from? For example, the same position from 10 different games, 6 of which are wins, or 10 unique positions, 6 of which are wins? Taken to the extreme, 200,000 instances of the same position would be awful for training, though it would give you a better measure of that position's win probability. We are learning based on both the board layout and the game score, so the more (realistic) variety we have in each, the better the training would be expected to go.

In practice, you are probably better off training on 200,000 positions, some of which appear multiple times, than filtering on unique positions and training on 190,000 positions.
Indeed, and this is why I am thinking of filtering the positions in such a way that I keep track of the WLD score. In fact, this is exactly what my book generator does: just a binary tree with hashes and scores. The only thing I have to change is to add the full position to a node, and to output the position instead of the hash, either in binary or EPD format.
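The win/loss/draw (WLD) bookkeeping described above might look like the following sketch: duplicate positions collapse into one entry keyed by position hash, with per-outcome counts, and the tuner then targets the averaged result instead of each duplicate separately. All names here are illustrative, not from the actual book generator.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Per-position record: the position itself (for output later as EPD)
// plus how often it was won, drawn, or lost.
struct WldCount {
    std::string epd;
    int win = 0, draw = 0, loss = 0;

    // Averaged result in [0, 1], the target the tuner would use.
    double averageResult() const {
        int n = win + draw + loss;
        return n ? (win + 0.5 * draw) / n : 0.5;
    }
};

class TrainingSet {
public:
    // result is 1.0 (win), 0.5 (draw) or 0.0 (loss) from white's view.
    void add(uint64_t key, const std::string& epd, double result) {
        WldCount& c = counts_[key];
        if (c.epd.empty()) c.epd = epd;   // remember the position once
        if (result == 1.0)      ++c.win;
        else if (result == 0.5) ++c.draw;
        else                    ++c.loss;
    }

    const std::unordered_map<uint64_t, WldCount>& entries() const {
        return counts_;
    }

private:
    std::unordered_map<uint64_t, WldCount> counts_;
};
```

This keeps the statistical information from duplicates (one entry with an averaged target) while avoiding the eval-to-zero pull that separately stored contradictory copies can create.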