
Texel tuning method question

Posted: Mon Jun 05, 2017 6:13 pm
by sandermvdb
I just started tuning my evaluation parameters using the Texel tuning method. I am using the q-search score, but the problem with this is that calculating the error for my test set (4 million positions) takes about 20 seconds.
The reason this takes so long is that about 16 million positions are actually evaluated (the q-search explores captures and check evasions). I skip bad captures in the q-search, but calculating the SEE score of course also takes time.

Some people use a quiet set for this tuning method. Is this the reason why?
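For context, the error being minimized in Texel tuning is the mean squared difference between the game result and a sigmoid of the engine's score. A minimal sketch; the scaling constant K here is an assumed starting value, not taken from the thread:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Sigmoid mapping a centipawn score to an expected game result in [0, 1].
// K is the scaling constant; the value used here is an assumption.
double sigmoid(double score, double K = 1.13) {
    return 1.0 / (1.0 + std::pow(10.0, -K * score / 400.0));
}

// Mean squared error over (score, result) pairs, where result is
// 1.0 / 0.5 / 0.0 for a white win / draw / loss.
double evaluationError(const std::vector<std::pair<double, double>>& data) {
    double sum = 0.0;
    for (const auto& [score, result] : data)
        sum += (result - sigmoid(score)) * (result - sigmoid(score));
    return sum / data.size();
}
```

With the q-search score plugged in for each position, one call to evaluationError is the 20-second pass described above.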

Re: Texel tuning method question

Posted: Mon Jun 05, 2017 6:16 pm
by ZirconiumX
sandermvdb wrote:I just started tuning my evaluation parameters using the Texel tuning method. I am using the q-search score, but the problem with this is that calculating the error for my test set (4 million positions) takes about 20 seconds.
The reason this takes so long is that about 16 million positions are actually evaluated (the q-search explores captures and check evasions). I skip bad captures in the q-search, but calculating the SEE score of course also takes time.

Some people use a quiet set for this tuning method. Is this the reason why?
I tried the Gaviota method of using the straight eval score rather than QS. It was much faster. Whether it was better is hard to say, because Texel tuning was a loss for me.

Re: Texel tuning method question

Posted: Mon Jun 05, 2017 9:43 pm
by Ferdy
sandermvdb wrote:I just started tuning my evaluation parameters using the Texel tuning method. I am using the q-search score, but the problem with this is that calculating the error for my test set (4 million positions) takes about 20 seconds.
The reason this takes so long is that about 16 million positions are actually evaluated (the q-search explores captures and check evasions). I skip bad captures in the q-search, but calculating the SEE score of course also takes time.

Some people use a quiet set for this tuning method. Is this the reason why?
Texel tuning is not a contest of speed. 4 million positions in 20 seconds is fine. You'd better worry about the diversity of the training positions you use.

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 12:51 am
by jdart
Right. 20 seconds is fast. Takes me maybe 10 minutes (on a big 24-core machine) but I am using a 2-ply search. I calculate the PV, and then do gradient descent based on the end-of-PV evals. Then periodically I re-calculate the PV as the parameters are tuned.

--Jon

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 6:00 am
by sandermvdb
So I guess using the local optimization method described on the CPW (Chess Programming Wiki) is not the way to go. Tuning ~400 parameters would take years!
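For reference, the CPW-style local search referred to above can be sketched as follows. computeError is a stand-in for a full error pass over the test set, which is exactly why ~400 parameters get expensive: every single-point nudge costs one full pass.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Sketch of the local-search loop described on the CPW: nudge each parameter
// by +1, and if that does not lower the training error, try -1; keep only
// changes that help. Repeat until a full sweep yields no improvement.
std::vector<int> localOptimize(
        std::vector<int> params,
        const std::function<double(const std::vector<int>&)>& computeError) {
    double bestError = computeError(params);
    bool improved = true;
    while (improved) {
        improved = false;
        for (std::size_t i = 0; i < params.size(); ++i) {
            params[i] += 1;                  // try one step up
            double e = computeError(params);
            if (e < bestError) {
                bestError = e;
                improved = true;
            } else {
                params[i] -= 2;              // try one step down
                e = computeError(params);
                if (e < bestError) {
                    bestError = e;
                    improved = true;
                } else {
                    params[i] += 1;          // neither helped: restore
                }
            }
        }
    }
    return params;
}
```

Each sweep over N parameters costs up to 2N error evaluations, so at 20 seconds per evaluation a single sweep over 400 parameters already takes hours.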

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 8:03 am
by PK
Then be selective. Tune all the material values and see if it helps. After a couple of runs I got the following. The trend was to increase the pawn and rook values in the endgame.

Code: Select all

    values[P_MID] = 95;   // 95
    values[N_MID] = 310;  // 310
    values[B_MID] = 322;  // 320
    values[R_MID] = 514;  // 515
    values[Q_MID] = 1000;

    values[P_END] = 110;  // 106
    values[N_END] = 305;  // 305
    values[B_END] = 320;  // 320
    values[R_END] = 527;  // 520
    values[Q_END] = 1012; // 1010

    // Material adjustments

    values[B_PAIR] = 51;
    values[N_PAIR] = -9;
    values[R_PAIR] = -9;
    values[ELEPH]  = 4;  // queen loses this much with each enemy minor on the board
    values[A_EXC]  = 29; // exchange advantage additional bonus
    values[A_MIN]  = 53; // additional bonus for minor piece advantage
    values[A_MAJ]  = 60; // additional bonus for major piece advantage
    values[A_TWO]  = 44; // additional bonus for two minors for a rook
    values[A_ALL]  = 80; // additional bonus for advantage in both majors and minors
    values[N_CL]   = 7;  // knight gains this much with each own pawn present on the board
    values[R_OP]   = 3;  // rook loses this much with each own pawn present on the board

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 8:56 am
by sasachess
I divided the tuning procedure into two steps:
1. Discarding unhelpful positions (initial position, fewer than 7 pieces, king in check, Eval != Quiesce, mate score)
2. parameter tuning

The first step is executed only once: given an input.epd file with EPD positions and the final game result, it produces selected.epd with the selected positions and skipped.csv with the skipped positions.

The second step starts with selected.epd and produces tuned.csv with the tuned parameters.
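The filtering step above might look roughly like this. The PositionInfo fields and the mate-score bound are assumptions standing in for real engine hooks, not taken from the post:

```cpp
// Hypothetical per-position summary; in a real engine these fields would be
// computed from the board and the search.
struct PositionInfo {
    int pieceCount;
    bool inCheck;
    int staticEval;
    int qsearchEval;
};

const int MATE_BOUND = 30000;  // assumed threshold for mate scores

// Keep a position only if it passes the filters described above.
bool keepPosition(const PositionInfo& p) {
    if (p.pieceCount < 7) return false;              // too few pieces
    if (p.inCheck) return false;                     // king in check
    if (p.staticEval != p.qsearchEval) return false; // Eval != Quiesce: not quiet
    if (p.qsearchEval >= MATE_BOUND || p.qsearchEval <= -MATE_BOUND)
        return false;                                // mate score
    return true;
}
```

Requiring Eval == Quiesce keeps only quiet positions, which is also what makes the later tuning passes cheap: the static eval alone suffices once the tactical positions are gone.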

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 4:03 pm
by Ferdy
jdart wrote:Right. 20 seconds is fast. Takes me maybe 10 minutes (on a big 24-core machine) but I am using a 2-ply search. I calculate the PV, and then do gradient descent based on the end-of-PV evals. Then periodically I re-calculate the PV as the parameters are tuned.

--Jon
Interesting. By "end-of-PV evals", do you mean using your static evaluation function to get the eval of the position at the end of the PV?

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 5:43 pm
by Desperado
Ferdy wrote:
jdart wrote:Right. 20 seconds is fast. Takes me maybe 10 minutes (on a big 24-core machine) but I am using a 2-ply search. I calculate the PV, and then do gradient descent based on the end-of-PV evals. Then periodically I re-calculate the PV as the parameters are tuned.

--Jon
Interesting. By "end-of-PV evals", do you mean using your static evaluation function to get the eval of the position at the end of the PV?
Maybe I should think about it twice, but the PV eval should be passed to the root as the search result. So at first glance I don't know how the "eval at the end of the PV" differs from the search-result score.

And isn't every line (including the PV, of course) scored by the static evaluation at its final node?

So, what am I missing?

Re: Texel tuning method question

Posted: Tue Jun 06, 2017 6:18 pm
by AlvaroBegue
Desperado wrote:
Ferdy wrote:
jdart wrote:Right. 20 seconds is fast. Takes me maybe 10 minutes (on a big 24-core machine) but I am using a 2-ply search. I calculate the PV, and then do gradient descent based on the end-of-PV evals. Then periodically I re-calculate the PV as the parameters are tuned.

--Jon
Interesting. By "end-of-PV evals", do you mean using your static evaluation function to get the eval of the position at the end of the PV?
Maybe I should think about it twice, but the PV eval should be passed to the root as the search result. So at first glance I don't know how the "eval at the end of the PV" differs from the search-result score.

And isn't every line (including the PV, of course) scored by the static evaluation at its final node?

So, what am I missing?
The trick is doing the gradient descent. While it would be possible to do it on the search function itself, it would be hard to make that efficient. So instead, you need to recover what position gave the eval that was propagated to the root, and then compute the gradient of the evaluation function at that node.
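Assuming the evaluation is linear in its parameters, the gradient at the end-of-PV position can be written in closed form. A sketch; the scaling constant K and the feature-vector representation of the position are assumptions, not from the thread:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

const double K = 1.13;  // assumed sigmoid scaling constant

// Texel sigmoid mapping a centipawn score to an expected result in [0, 1].
double sigmoidT(double s) { return 1.0 / (1.0 + std::pow(10.0, -K * s / 400.0)); }

// Gradient of the per-position squared error E = (result - sigmoid(eval))^2
// with respect to the weights w, for a linear eval = sum_j w[j] * f[j].
// f holds the feature counts of the end-of-PV position recovered from search.
std::vector<double> errorGradient(const std::vector<double>& w,
                                  const std::vector<double>& f,
                                  double result) {
    double s = 0.0;
    for (std::size_t j = 0; j < w.size(); ++j) s += w[j] * f[j];
    double p = sigmoidT(s);
    // d(sigmoid)/ds by the chain rule: p * (1 - p) * K * ln(10) / 400
    double dp = p * (1.0 - p) * K * std::log(10.0) / 400.0;
    std::vector<double> g(w.size());
    for (std::size_t j = 0; j < w.size(); ++j)
        g[j] = -2.0 * (result - p) * dp * f[j];  // dE/dw_j
    return g;
}
```

Summing these gradients over all training positions and taking a step against the sum is one iteration of the gradient descent; the leaf positions only need to be re-extracted when the PVs are recomputed.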