Logarithmic Patterns In Evaluations

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Logarithmic Patterns In Evaluations

Post by D Sceviour »

D Sceviour wrote:
jdart wrote: No, I am not saying hand-tune mobility. I am saying, Knight with one square to move to gets one weight, and Knight with two squares to move to gets another weight. Both weights can be auto-tuned. They are independent.
I do not agree that they are independent. On average, they follow a logarithmic pattern of increase.
It should be added that this is the point of the exercise. Individually tuning each parameter only tries to make up for the inadequacy of the other variables, in spite of Peter Osterlund's claim that the tuning method adapts somewhat for the elasticity of values. The result is an uneven curve with occasional spikes and inexplicable values.

By forcing the parameters to follow a smooth curve, the other piece values can fit their own curves better. The result should be that pieces do not fight with each other for control of space on the board, but adapt to each other to maximize mobility. The final test is whether there is an increase in strength. This cannot ultimately be seen by changing mobility for only one piece, but for all pieces together, so they can co-ordinate. Also, by forcing a natural logarithmic curve and tuning only its coefficients, a smaller sample size for the test set should produce faster convergence.
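To make the idea concrete, here is a minimal sketch of generating a knight mobility table from a natural-log curve, so that the tuner optimizes only the two coefficients instead of nine independent entries. It is not taken from any particular engine; the names MobilityCurve, scale, offset and buildKnightMobilityTable are made up for illustration.

Code: Select all

// Generate the whole mobility table from a natural-log curve and tune
// only its two coefficients, instead of tuning each entry independently.
#include <array>
#include <cmath>

// Tunable coefficients for the knight mobility curve.
struct MobilityCurve {
    double scale;   // height of the curve (centipawns)
    double offset;  // shift applied to every entry (centipawns)
};

// A knight can reach at most 8 squares.
constexpr int KNIGHT_MAX_MOBILITY = 8;

// Build the per-square-count bonus table from the two coefficients.
std::array<int, KNIGHT_MAX_MOBILITY + 1>
buildKnightMobilityTable(const MobilityCurve& c) {
    std::array<int, KNIGHT_MAX_MOBILITY + 1> table{};
    for (int n = 0; n <= KNIGHT_MAX_MOBILITY; ++n)
        table[n] = static_cast<int>(c.scale * std::log(n + 1) + c.offset);
    return table;
}

With only two free parameters per piece, the optimizer has far fewer dimensions to search, which is where the faster convergence on a smaller test set would come from.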

This method is new (to me), but eventually it should be adaptable to all tables. The next step will be to try passed pawn values.
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Logarithmic Patterns In Evaluations

Post by jdart »

> The result is an uneven curve with occasional spikes and unexplainable values.

This is to be expected, because your training data is noisy: you only know one of the three possible results for each position, and some of those labels are bogus; you might have gotten a draw result from a lost position, for example.

The more training data you have, the less of a problem this is.
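For reference, this is roughly the kind of error that result-based tuners minimize, a sketch along the lines of Osterlund's Texel tuning method. The scaling constant K and the TrainingPosition layout are illustrative only, and K has to be fitted per engine.

Code: Select all

// Each training position carries only one observed game result (1, 0.5 or 0),
// which the tuner compares against a sigmoid of the static evaluation.
#include <cmath>
#include <vector>

struct TrainingPosition {
    double result;   // 1.0 = white won, 0.5 = draw, 0.0 = black won
    int    eval;     // static evaluation in centipawns for the current parameters
};

// Map a centipawn score to an expected result in [0, 1].
double expectedResult(int eval, double K = 1.13) {
    return 1.0 / (1.0 + std::pow(10.0, -K * eval / 400.0));
}

// Mean squared error over the training set.
double trainingError(const std::vector<TrainingPosition>& data) {
    double sum = 0.0;
    for (const auto& p : data) {
        double diff = p.result - expectedResult(p.eval);
        sum += diff * diff;
    }
    return data.empty() ? 0.0 : sum / data.size();
}

Each position contributes only a single noisy win/draw/loss label, so the measured error only approaches the true expected score as the number of positions grows.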

But also: you really have to look at the results of training. Does the engine play better, despite what look to you like weird values? If so, then you should probably leave it be and not try to coerce it into values that look OK to you.

--Jon