
Cooking Cheese with Texel's tuning method

Posted: Fri Jun 16, 2017 10:41 pm
by Patrice Duhamel
I'm trying to use Texel's tuning method, and I don't understand the results.

Using the same algorithm as on the wiki page: https://chessprogramming.wikispaces.com ... ing+Method
I generated 50,000 games at 10s+0.1s between Cheese 1.7 and 1.8, using Adam-Hair-8moves-100133.pgn for the openings. After removing book moves and moves with a high score or a mate score, I have 5.5 million positions (46% draws).
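
For reference, the error that algorithm minimizes is E = 1/N * sum_i (R_i - sigmoid(q_i))^2, where R_i is the game result (1, 0.5 or 0 from White's view) and sigmoid(q) = 1 / (1 + 10^(-K*q/400)) maps an eval q to an expected score. A minimal C++ sketch, where TrainingPos and its fields are illustrative stand-ins for engine-specific data:

Code:
#include <cmath>
#include <vector>

// One training position: the game result and the engine's (quiescence)
// eval in centipawns, both from White's point of view.
struct TrainingPos {
    double result;   // 1.0 = White win, 0.5 = draw, 0.0 = Black win
    int qScore;      // eval of the position in centipawns
};

// Mean squared error between game results and the sigmoid of the eval.
// K is a scaling constant, fitted once so the sigmoid matches the data.
double meanSquaredError(const std::vector<TrainingPos>& positions, double k) {
    double e = 0.0;
    for (const TrainingPos& p : positions) {
        double expected = 1.0 / (1.0 + std::pow(10.0, -k * p.qScore / 400.0));
        double diff = p.result - expected;
        e += diff * diff;
    }
    return e / positions.size();
}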

I tried tuning the piece material values (same value for middlegame and endgame), starting with pawn=100, knight=bishop=325, rook=500, queen=975. The result after 73 iterations is pawn=94, knight=bishop=325, rook=525, queen=1047.

The problem is that the bishop and knight values never changed: 324 or 326 never reduced the error.

Now if I start with knight=321 and bishop=325, all values change and give a different result.
After 143 iterations the result is pawn=99, knight=346, bishop=356, rook=562, queen=1117.

Do you think these results are normal?

How should I choose the parameters' starting values?

How should I choose positions for better results? (draw rate? openings? more positions? ...)

Re: Cooking Cheese with Texel's tuning method

Posted: Fri Jun 16, 2017 11:50 pm
by petero2
Patrice Duhamel wrote:I'm trying to use Texel's tuning method, and I don't understand the results.

The problem is that the bishop and knight values never changed: 324 or 326 never reduced the error.
I see this too in texel and I think the reason is that I use these values in the SEE algorithm. This means that if they are not equal, an N vs B trade will not be considered an equal trade. This is likely bad when using SEE for pruning or reduction decisions.

The way I deal with this when tuning texel is to only have one parameter that controls both the knight and the bishop value.
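
A minimal sketch of that arrangement (illustrative names, not texel's actual code): one tunable value drives both minors, so SEE keeps seeing an N-for-B trade as even while the tuner moves it:

Code:
// A single tunable "minor" parameter shared by knight and bishop.
struct MaterialParams {
    int pawn  = 100;
    int minor = 325;   // used for both knight and bishop
    int rook  = 500;
    int queen = 975;
};

// Piece values as used by both the evaluation and the SEE routine.
int pieceValue(const MaterialParams& mp, char piece) {
    switch (piece) {
        case 'P': return mp.pawn;
        case 'N':
        case 'B': return mp.minor;  // equal values: NxB stays an even trade
        case 'R': return mp.rook;
        case 'Q': return mp.queen;
        default:  return 0;
    }
}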

Re: Cooking Cheese with Texel's tuning method

Posted: Sat Jun 17, 2017 5:14 am
by Ferdy
Patrice Duhamel wrote:I'm trying to use Texel's tuning method, and I don't understand the results.
[...]
Do you think these results are normal?
I believe this result is normal. It depends on the training positions and the parameters that you try to optimize.
Patrice Duhamel wrote:How should I choose the parameters' starting values?
Start with the default values. You can then increment the parameter values by +1, -1, +2, -2 cp, for example.
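
That stepping is essentially the local search described on the CPW page. A minimal sketch, where E() stands for the training-error function evaluated after installing the candidate values:

Code:
#include <vector>

double E(const std::vector<int>& params);  // install params, return training error

// Local search from the CPW page: nudge each parameter by +1, then -1,
// keeping any change that lowers the error; repeat until a full pass
// over all parameters brings no improvement.
std::vector<int> localOptimize(const std::vector<int>& initialGuess) {
    std::vector<int> best = initialGuess;
    double bestE = E(best);
    bool improved = true;
    while (improved) {
        improved = false;
        for (std::size_t i = 0; i < best.size(); i++) {
            std::vector<int> candidate = best;
            candidate[i] += 1;               // try one step up
            double newE = E(candidate);
            if (newE >= bestE) {
                candidate[i] -= 2;           // no luck: try one step down
                newE = E(candidate);
            }
            if (newE < bestE) {
                bestE = newE;
                best = candidate;
                improved = true;
            }
        }
    }
    return best;
}
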
Patrice Duhamel wrote:How should I choose positions for better results? (draw rate? openings? more positions? ...)
1. Try to have an equal or close to equal distribution of draw, win and loss results (a sketch of one way to balance them follows this list).
2. Use more varied openings.
3. Use more positions.
4. Collect positions having the more common evaluation features, i.e. training positions with passed pawns, piece outposts, closed/open positions for mobility, pins, and pawn weaknesses such as isolated, doubled and backward pawns, plus rooks on open files, rooks on the 7th rank, and others. Collect as many positions with these features as you can.
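
A minimal sketch of point 1, assuming each position carries its game result (TrainingPos is an illustrative record): downsample every result class to the size of the smallest one:

Code:
#include <algorithm>
#include <random>
#include <vector>

struct TrainingPos { double result; /* plus engine-specific data */ };

// Downsample wins, draws and losses to the size of the smallest class,
// so the three results are equally represented in the training set.
std::vector<TrainingPos> balanceByResult(std::vector<TrainingPos> all) {
    std::vector<TrainingPos> wins, draws, losses;
    for (const TrainingPos& p : all) {
        if (p.result == 1.0)      wins.push_back(p);
        else if (p.result == 0.0) losses.push_back(p);
        else                      draws.push_back(p);
    }
    const std::size_t n = std::min({wins.size(), draws.size(), losses.size()});
    std::mt19937 rng(12345);                  // fixed seed: reproducible sets
    std::vector<TrainingPos> balanced;
    for (std::vector<TrainingPos>* cls : {&wins, &draws, &losses}) {
        std::shuffle(cls->begin(), cls->end(), rng);
        balanced.insert(balanced.end(), cls->begin(), cls->begin() + n);
    }
    return balanced;
}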

Re: Cooking Cheese with Texel's tuning method

Posted: Sat Jun 17, 2017 11:29 am
by Patrice Duhamel
petero2 wrote:I see this too in texel and I think the reason is that I use these values in the SEE algorithm. This means that if they are not equal, an N vs B trade will not be considered an equal trade. This is likely bad when using SEE for pruning or reduction decisions.

The way I deal with this when tuning texel is to only have one parameter that controls both the knight and the bishop value.
Is it better to use different values for SEE?

Re: Cooking Cheese with Texel's tuning method

Posted: Sat Jun 17, 2017 11:31 am
by Patrice Duhamel
Ferdy wrote:3. Use more positions.
Is the only limit the time we can spend on tuning?

Re: Cooking Cheese with Texel's tuning method

Posted: Sat Jun 17, 2017 1:22 pm
by Ferdy
Patrice Duhamel wrote:
Ferdy wrote:3. Use more positions.
Is the only limit the time we can spend on tuning?
Not really; the quality of the training set is important, not just randomly generated positions from any game. There can also be positions that are similar but not exact duplicates, in which case they can be removed from the training set.
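
A simple first pass for that is to key each position on its first four FEN fields, so the move counters don't hide repeats; similar-but-not-identical positions need a fuzzier, engine-specific test. A minimal sketch, assuming the positions are stored as FEN strings:

Code:
#include <sstream>
#include <string>
#include <unordered_set>
#include <vector>

// Keep the first occurrence of each position. The key uses the first four
// FEN fields (placement, side to move, castling, en passant) and ignores
// the halfmove/fullmove counters.
std::vector<std::string> dropExactDuplicates(const std::vector<std::string>& fens) {
    std::unordered_set<std::string> seen;
    std::vector<std::string> unique;
    for (const std::string& fen : fens) {
        std::istringstream in(fen);
        std::string board, stm, castle, ep;
        in >> board >> stm >> castle >> ep;
        if (seen.insert(board + ' ' + stm + ' ' + castle + ' ' + ep).second)
            unique.push_back(fen);
    }
    return unique;
}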

Re: Cooking Cheese with Texel's tuning method

Posted: Sat Jun 17, 2017 2:34 pm
by PK
Rodent uses different values for SEE. Another way would be to declare a capture good if SEE > -25 (or you might want to use a bigger margin in order to accept R for B+P).
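
A minimal sketch of that margin test (thresholds are illustrative):

Code:
// Accept a capture as "good" if SEE does not lose more than a margin.
// With SEE_MARGIN = -25, an NxB trade under slightly unequal minor values
// still passes; widening it to about -100 would also accept R for B+P
// (325 + 100 - 500 = -75 with the usual values).
constexpr int SEE_MARGIN = -25;

bool isGoodCapture(int seeScore) {
    return seeScore > SEE_MARGIN;
}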

Re: Cooking Cheese with Texel's tuning method

Posted: Sat Jun 17, 2017 3:21 pm
by cdani
PK wrote:Rodent uses different values for SEE. Another way would be to declare a capture good if SEE > -25 (or you might want to use a bigger margin in order to accept R for B+P).
Andscacs too.

Re: Cooking Cheese with Texel's tuning method

Posted: Wed Jun 21, 2017 10:58 am
by asanjuan
I don't have time to work on my own engine, so let's help others instead.
:)
Ferdy wrote:1. Try to have an equal or close to equal distribution of draw, win and loss results.
[...]

The way I achieve that is by collecting games where one side is stronger (say 50 Elo) than the other. It narrows the draw percentage, and you can consider those draws real draws, because the stronger side was not able to win against a weaker player.

The easiest way to collect them is to play a match of your engine against itself with the time control doubled, for example 40/1 second vs 40/2 second games. It's easy to do with cutechess-cli; see the example below.
The resulting games will contain outposts and smooth positional advantages that get converted into a material advantage, and then into a won endgame.
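
Something along these lines with cutechess-cli (flags can vary a little between versions; the engine path, book and round count are placeholders):

Code:
cutechess-cli -engine cmd=./cheese tc=40/1 -engine cmd=./cheese tc=40/2 \
    -each proto=uci -openings file=book.pgn order=random \
    -rounds 5000 -pgnout tuning_games.pgn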

Also, try not to adjudicate the games, so they are played until checkmate. The learning algorithm will then learn how to convert a material advantage and, for instance, how important king safety is in endgames.

Another thing I do in the learning process is to set different deltas for the parameter increments. For example:

Set delta = 100 and tune the whole parameter set, setting param +/- delta (helps to find the queen and rook values very fast).
Then set delta = 10 and tune all parameters.
Then set delta = 5 and tune all parameters.
Then set delta = 1 and tune all parameters.

In my experience, the learning search time is greatly reduced. A sketch of this schedule follows below.
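
A minimal sketch of that schedule (illustrative names, with E() as the training-error function from the earlier sketches):

Code:
#include <vector>

double E(const std::vector<int>& params);  // install params, return training error

// One pass of +/-delta coordinate steps; true if anything improved.
bool tunePass(std::vector<int>& params, double& bestE, int delta) {
    bool improved = false;
    for (std::size_t i = 0; i < params.size(); i++) {
        for (int sign : {+1, -1}) {
            params[i] += sign * delta;
            double e = E(params);
            if (e < bestE) {
                bestE = e;
                improved = true;
                break;                      // keep this value, next parameter
            }
            params[i] -= sign * delta;      // revert, try the other direction
        }
    }
    return improved;
}

// Coarse-to-fine: big deltas find the queen and rook values fast, small
// deltas refine everything afterwards.
void tuneWithSchedule(std::vector<int>& params) {
    double bestE = E(params);
    for (int delta : {100, 10, 5, 1})
        while (tunePass(params, bestE, delta)) {
            // repeat at this delta until a pass brings no improvement
        }
}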

Re: Cooking Cheese with Texel's tuning method

Posted: Wed Jun 21, 2017 4:15 pm
by jdart
asanjuan wrote:set delta = 100 and tune the whole parameter set, setting param +/- delta (helps to find the queen and rook values very fast).
[...]

Using a decreasing delta can be completely automated, for example with Adagrad (https://xcorr.net/2014/01/23/adagrad-el ... t-descent/), which is more efficient than the method described on the CPW page.
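
A minimal sketch of an Adagrad step on the same training error E, using a central finite difference for the gradient (the learning rate, epsilon and one-centipawn probe are illustrative):

Code:
#include <cmath>
#include <vector>

double E(const std::vector<double>& params);  // training error (continuous params)

// Adagrad: each parameter accumulates its squared gradients, and its
// effective learning rate shrinks as that sum grows, which automates
// the decreasing step size.
void adagradStep(std::vector<double>& params, std::vector<double>& gSqSum,
                 double lr = 1.0, double eps = 1e-8) {
    const double h = 1.0;                       // finite-difference probe
    for (std::size_t i = 0; i < params.size(); i++) {
        params[i] += h;
        const double ePlus = E(params);
        params[i] -= 2.0 * h;
        const double eMinus = E(params);
        params[i] += h;                         // restore original value
        const double g = (ePlus - eMinus) / (2.0 * h);  // central difference
        gSqSum[i] += g * g;
        params[i] -= lr * g / (std::sqrt(gSqSum[i]) + eps);
    }
}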

Some people have also used methods such as BOBYQA (https://en.wikipedia.org/wiki/BOBYQA) or L-BFGS (https://en.wikipedia.org/wiki/Limited-memory_BFGS), which are available in many optimization libraries. These require approximating the gradient and the Hessian (second derivative), which is expensive, but on the other hand they converge quickly.

--Jon