Troubles with Texel Tuning

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Troubles with Texel Tuning

Post by Daniel Shawul »

AlvaroBegue wrote:
Daniel Shawul wrote:Using a first order method such as the conjugate-gradient method seems to suffer from the same problem. I think there is something in the way mini-batches are processed in SGD that I am not doing here...
I think of conjugate-gradient as a second order method. I am talking about simpler algorithms than that. Adam is very popular these days (perhaps because it's the default learning algorithm in TensorFlow), but even plain gradient descent works well.
Ah! scipy does not have anything like the steepest-descent algorithm, which is first order; I will test those later.
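Plain steepest descent is simple enough to write by hand. A minimal sketch of the idea (illustrative names, not my actual code; the gradient is estimated with forward differences over the full objective):

Code: Select all

#include <functional>
#include <vector>

// Steepest descent with a forward-difference gradient estimate:
// step against the gradient of the MSE objective with step size alpha.
void steepest_descent(std::vector<double>& x,
                      const std::function<double(const std::vector<double>&)>& mse,
                      double alpha, double delta, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        std::vector<double> grad(x.size());
        const double f0 = mse(x);
        for (size_t j = 0; j < x.size(); ++j) {
            x[j] += delta;                      // perturb one parameter
            grad[j] = (mse(x) - f0) / delta;    // forward difference
            x[j] -= delta;                      // restore it
        }
        for (size_t j = 0; j < x.size(); ++j)
            x[j] -= alpha * grad[j];            // first-order update
    }
}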
AndrewGrant
Posts: 1754
Joined: Tue Apr 19, 2016 6:08 am
Location: U.S.A
Full name: Andrew Grant

Re: Troubles with Texel Tuning

Post by AndrewGrant »

Here is what I get from your positions:

Code: Select all

Material : {  66,  106}, { 259,  222}, { 245,  220}, { 284,  421}, { 446,  802}, {   0,    0}, 

PawnPSQT : 
{   0,    0}, {   0,    0}, {   0,    0}, {   0,    0},
{ -10,   -9}, {  -3,   -7}, {   3,   -9}, {  -3,   -4},
{  -8,   -9}, {  -2,   -8}, {  -2,  -10}, {   0,   -4},
{ -11,   -8}, {  -9,   -6}, {  -3,  -10}, {  -2,  -12},
{  -4,   -1}, {  -3,    0}, {  -3,   -7}, {  -3,  -12},
{   9,    6}, {  14,    6}, {   4,    1}, {   9,   -2},
{  34,   52}, {  24,   57}, {  14,   55}, {  24,   52},
{   0,    0}, {   0,    0}, {   0,    0}, {   0,    0},

KnightPSQT:
{ -33,  -26}, { -25,  -17}, { -14,  -11}, { -11,   -3},
{ -25,  -18}, { -17,    2}, {  22,    5}, {  12,   15},
{ -18,   -5}, {   8,   13}, {  20,   16}, {  26,   41},
{  -8,  -13}, {  18,   15}, {  32,   29}, {  34,   34},
{   8,    2}, {  30,   17}, {  38,   28}, {  50,   33},
{   3,    5}, {  18,   11}, {  23,   30}, {  60,   30},
{   3,    1}, {  13,    0}, {  16,   17}, {  19,   26},
{ -38,  -43}, {  -7,  -14}, {   0,    1}, {  -2,   -3},

I have the other PSQTs as well...

Isolated Pawn :  { -10,   -7},

Stacked Pawn : {   0,  -19},

Mobility Knight:
 { -17,  -28}, { -15,  -37}, { -11,  -34},
 {  -7,  -38}, { -16,  -32}, { -17,  -31},
 { -29,   -3}, {   0,    0}, {   0,    0},

Mobility Bishop:
{ -15,  -10}, { -28,  -14}, {  -8,   -9}, {   4,    8},
{  11,   16}, {  20,   18}, {  27,   24}, {  30,   27},
{  35,   31}, {  38,   28}, {  41,   36}, {  51,   18},
{  24,   40}, {  13,    5},

Mobility Rook:
 {  -6,   -2}, { -32,  -12}, {   7,  -14}, {  15,   -3}, {  25,    9},
 {  26,   13}, {  22,   29}, {  24,   31}, {  27,   37}, {  29,   42},
 {  32,   47}, {  34,   52}, {  34,   59}, {  28,   65}, {  13,   64},

Mobility Queen:
 {   0,    0}, {   0,    0}, {  -1,    0}, {  -6,   -1}, { -20,   -5}, { -19,   -2}, { -14,    0},
 {  -1,    3}, {   0,   14}, {   7,   13}, {  12,   23}, {  16,   24}, {  19,   29}, {  24,   29},
 {  24,   38}, {  29,   42}, {  35,   43}, {  34,   54}, {  37,   58}, {  38,   68}, {  41,   73},
 {  48,   73}, {  43,   67}, {  40,   63}, {  29,   46}, {  17,   28}, {   6,    9}, {   3,    6},
Tests with these values (generated with all params starting at 0) showed a loss of ~30 Elo. The values I got when starting from my current ones also showed a loss of about ~20 Elo.

Tuning seems futile at this point
#WeAreAllDraude #JusticeForDraude #RememberDraude #LeptirBigUltra
"Those who can't do, clone instead" - Eduard ( A real life friend, not this forum's Eduard )
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Troubles with Texel Tuning

Post by Evert »

Some obvious issues with those values: the MG values for the Rook and Queen are far too low (I had the same issue; it goes away with more knowledge), and the value of the minors goes down in the endgame (which leads to bad play). The mobility tables look very noisy; you may want to verify that each mobility count is actually represented in the test positions (I suspect the high Queen mobility entries are not represented, for instance). A better idea than tuning individual values is probably to tune a formula that builds up the table.
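For instance, something like this rough sketch (the curve's shape and names are only illustrative): build each mobility table from a small parametric curve and tune the two or three coefficients instead of every entry, so mobility counts that rarely occur in the test positions are interpolated rather than left to noise.

Code: Select all

#include <cmath>

// Build a mobility table from a 3-parameter curve instead of tuning
// each entry independently (shape and names are illustrative):
//   bonus(n) = scale * (sqrt(n + 1) - shift) + offset
void build_mobility(double scale, double shift, double offset,
                    double table[], int max_moves) {
    for (int n = 0; n <= max_moves; ++n)
        table[n] = scale * (std::sqrt(double(n + 1)) - shift) + offset;
}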
Do you have passed pawn evaluation and king safety?
AndrewGrant
Posts: 1754
Joined: Tue Apr 19, 2016 6:08 am
Location: U.S.A
Full name: Andrew Grant

Re: Troubles with Texel Tuning

Post by AndrewGrant »

Evert wrote:Do you have passed pawn evaluation and king safety?
Yes and Yes. I tuned them as well. Every param in the evaluation is in the tuner.
#WeAreAllDraude #JusticeForDraude #RememberDraude #LeptirBigUltra
"Those who can't do, clone instead" - Eduard ( A real life friend, not this forum's Eduard )
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Troubles with Texel Tuning

Post by Daniel Shawul »

AlvaroBegue wrote:
Daniel Shawul wrote:Using a first order method such as the conjugate-gradient method seems to suffer from the same problem. I think there is something in the way mini-batches are processed in SGD that I am not doing here...
I think of conjugate-gradient as a second order method. I am talking about simpler algorithms than that. Adam is very popular these days (perhaps because it's the default learning algorithm in TensorFlow), but even plain gradient descent works well.
I tried the first-order steepest-descent algorithm now, which is not available in scipy. With a very high learning rate (alpha) it converges to good piece values quickly; however, it still cannot handle the sampling scheme I had. One reason could be that SGD requires a much lower learning rate, so I have to decrease that. But I am more inclined to believe that the sampling scheme I have is flawed. To compute the gradient I need f(x) and f(x+delta), but those two are computed on two different samples. The idea of SGD is to compute the gradient from a mini-batch (both f(x) and f(x+delta) from the same mini-batch) and use that for the update as if it were computed from the whole dataset.

Code: Select all

Engine(1506885786) <<< mse 0.01 0 0.00575646273249
Engine(1506885786) >>> 0.0658148119181500
Engine(1506885786) <<<
QUEEN_MG 921.995596819
ROOK_MG 526.493363712
BISHOP_MG 358.684044774
KNIGHT_MG 334.052506437
PAWN_MG 113.445753437
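For the record, the mini-batch scheme I mean is roughly this (a sketch with illustrative names): each update draws one batch of position indices and computes the whole gradient on that batch before moving on.

Code: Select all

#include <algorithm>
#include <functional>
#include <numeric>
#include <random>
#include <vector>

// Mini-batch SGD loop: 'grad_on' must compute the gradient (numerically
// or analytically) with both f(x) and f(x+delta) taken over the SAME
// batch of position indices; the batch gradient is then used for the
// update as if it were the full-dataset gradient.
void sgd(std::vector<double>& x, int num_positions, int batch_size, double alpha,
         const std::function<std::vector<double>(const std::vector<double>&,
                                                 const std::vector<int>&)>& grad_on) {
    std::vector<int> idx(num_positions);
    std::iota(idx.begin(), idx.end(), 0);
    std::mt19937 rng(12345);
    for (int epoch = 0; epoch < 10; ++epoch) {
        std::shuffle(idx.begin(), idx.end(), rng);  // fresh order each epoch
        for (int b = 0; b + batch_size <= num_positions; b += batch_size) {
            std::vector<int> batch(idx.begin() + b, idx.begin() + b + batch_size);
            std::vector<double> g = grad_on(x, batch);
            for (size_t j = 0; j < x.size(); ++j)
                x[j] -= alpha * g[j];
        }
    }
}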
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: Troubles with Texel Tuning

Post by AlvaroBegue »

Daniel Shawul wrote:To compute the gradient I need f(x) and f(x+delta), but those two are computed on two different samples. The idea of SGD is to compute the gradient from a mini-batch (both f(x) and f(x+delta) from the same mini-batch) and use that for the update as if it were computed from the whole dataset.
Why are you not computing the gradient directly, instead of doing this f(x) and f(x+delta) business?
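For the usual Texel objective E = (1/N) * sum_i (R_i - sigma(q_i))^2 with sigma(q) = 1/(1 + 10^(-K*q/400)), the gradient has a closed form. A sketch, assuming the eval is linear in the parameters so that dq_i/dtheta_j is just a per-position feature count c_ij (names are illustrative):

Code: Select all

#include <cmath>
#include <vector>

// Analytic gradient of the Texel MSE for a linear eval
// q_i = sum_j theta_j * c[i][j]; result[i] is R_i in {0, 0.5, 1}.
std::vector<double> texel_gradient(const std::vector<double>& theta,
                                   const std::vector<std::vector<double>>& c,
                                   const std::vector<double>& result,
                                   double K) {
    const double scale = K * std::log(10.0) / 400.0;  // d(sigma)/dq = scale*s*(1-s)
    const size_t N = c.size(), P = theta.size();
    std::vector<double> grad(P, 0.0);
    for (size_t i = 0; i < N; ++i) {
        double q = 0.0;
        for (size_t j = 0; j < P; ++j) q += theta[j] * c[i][j];
        const double s = 1.0 / (1.0 + std::pow(10.0, -K * q / 400.0));
        const double common = -2.0 * (result[i] - s) * s * (1.0 - s) * scale / N;
        for (size_t j = 0; j < P; ++j) grad[j] += common * c[i][j];
    }
    return grad;
}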
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Troubles with Texel Tuning

Post by Daniel Shawul »

AlvaroBegue wrote:
Daniel Shawul wrote:To compute the gradient I need f(x) and f(x+delta), but those two are computed on two different samples. The idea of SGD is to compute the gradient from a mini-batch (both f(x) and f(x+delta) from the same mini-batch) and use that for the update as if it were computed from the whole dataset.
Why are you not computing the gradient directly, instead of doing this f(x) and f(x+delta) business?
It needs more work, but I guess it would be comparable to modifying my eval to work with double precision. In fact, I actually started incorporating the three header files and modifying my eval to accept a template argument, but it got pretty messy quickly. However, it could be a good playground for testing individual evaluation terms.
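The templating itself is simple; it is the plumbing around the real eval that gets messy. A minimal sketch of the idea (stand-in types, not my actual eval): make the eval generic over its number type, so the same code runs with int for the search, double for tuning, or an autodiff type that carries derivatives along with the value.

Code: Select all

#include <array>

// Illustrative stand-in for an engine's position type.
enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, NPIECE };
struct Position { std::array<int, NPIECE> count{}; };

// Eval templated on its number type T: int (search), double (tuning),
// or an autodiff type that propagates derivatives through the terms.
template <typename T>
T evaluate(const Position& pos, const std::array<T, NPIECE>& value) {
    T score = T(0);
    for (int p = PAWN; p < NPIECE; ++p)
        score += value[p] * T(pos.count[p]);
    // ... the remaining eval terms, all written in terms of T ...
    return score;
}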

The stochastic issue was solved after I provided a finite-difference gradient functor to the optimizer, in which I used the same random-number seeds for f(x) and f(x+delta). This worked for the first-order method, i.e. gradient descent, but as expected it didn't work for the second-order methods CG and BFGS.
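Concretely, the seeding trick looks roughly like this (a sketch; names are illustrative): re-seeding the generator before each objective call makes f(x) and f(x+delta) sample the identical batch, so their difference reflects the parameter change instead of sampling noise.

Code: Select all

#include <functional>
#include <random>
#include <vector>

// Common-random-numbers finite-difference gradient: 'mse_sampled' draws
// its mini-batch from the supplied RNG, so re-seeding before each call
// guarantees f(x) and f(x+delta) see the same positions.
std::vector<double> seeded_fd_gradient(std::vector<double>& x,
        const std::function<double(const std::vector<double>&, std::mt19937&)>& mse_sampled,
        double delta, unsigned seed) {
    std::vector<double> grad(x.size());
    std::mt19937 rng(seed);
    const double f0 = mse_sampled(x, rng);  // f(x) on the seeded sample
    for (size_t j = 0; j < x.size(); ++j) {
        x[j] += delta;
        rng.seed(seed);                     // same seed => same sample
        grad[j] = (mse_sampled(x, rng) - f0) / delta;
        x[j] -= delta;
    }
    return grad;
}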

With SGD:

Code: Select all

Engine(1506891302) <<< mse 0.01 21788 0.00575646273249
Engine(1506891303) >>> 0.0670008220841582
Engine(1506891303) <<<
QUEEN_MG 903.193388403
ROOK_MG 523.035474633
BISHOP_MG 354.517504783
KNIGHT_MG 334.146614028
PAWN_MG 113.134921652
This might still not be a real SGD, though, as the samples may overlap in my scheme.

Daniel
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Troubles with Texel Tuning

Post by D Sceviour »

AndrewGrant wrote:Tuning seems futile at this point
I got 0 Elo change out of tuning. The tuning exercise did expose some gross errors in the evaluator that would have gone undetected without going over each of the variables. For example, I used to have a value for Kaufman's redundant rooks. Texel tuning did not like this and returned unstable results ranging from -500 to 500. Perhaps the formula was incorrect, but it was deleted anyway. The king safety calculation was completely rewritten. In many other cases, Texel tuning verified the optimal value the first time!

Every program already finds its own balance of evaluation with intuitive hand tuning. Overall, the additional tuning exercise changed the style of play but not the strength. It seems more predictable now. My program is less likely to beat stronger engines with wild attacks. On the other hand, the score is more stable against weaker programs.