In my engine I use an SSE variable to store the two terms of the evaluation [middlegame; endgame], and at the end of the evaluation I do a very common tapered evaluation.
Since the SSE variable is 128 bits long, it can hold up to four integer evaluation terms.
I'm trying to understand how I can use those two free slots to make a better eval function.
I thought of using the four slots for [middlegame open, middlegame closed, endgame open, endgame closed], but I never started to code it.
Any ideas I can try?
advanced tapered evaluation
Moderators: hgm, Rebel, chrisw
-
- Posts: 1600
- Joined: Mon Feb 21, 2011 9:48 am
Re: advanced tapered evaluation
You used (at least in the past) Agner's library (it never gave me a great result in Bouquet).
Your idea is good, but very difficult. I used other approaches, such as counting material, etc.
Bouquet 1.4 ended up having 8 different evals; it was just a mess.
I think two is good enough:
evaluate the atypical situations separately, but within these two primary forms.
-
- Posts: 855
- Joined: Sun May 23, 2010 1:32 pm
Re: advanced tapered evaluation
Yes, I use Agner's, and it's giving me good results. I also tried the GCC ones, but they seem slower. What I know today is that I don't use the two free eval terms, and I was wondering if I could try to use them for free.
-
- Posts: 334
- Joined: Sat Feb 25, 2012 10:42 pm
- Location: Stockholm
Re: advanced tapered evaluation
Hi Marco!
Why not try my idea about risk adjustment in evaluation (see http://www.talkchess.com/forum/viewtopi ... w=&start=0)?
You could have the risk adjustment tapered exactly like the normal evaluation.
I do not know the best way to determine the risk, but one way might be to take a huge set of chess positions and compare each position's static evaluation with a depth-x search evaluation of the same position. Let's say that you evaluate a position from White's point of view as p_static = (c_white_1*x_1 + c_white_2*x_2 + ... + c_white_n*x_n) - (c_black_1*x_1 + c_black_2*x_2 + ... + c_black_n*x_n), where x_1, x_2, ..., x_n denote your n evaluation features and c_white_i is the value of feature i from White's point of view. The same position is also evaluated with a depth-x search, giving the score p_x.
You can then determine how much the different features have contributed to the difference in win probability between the static score and the depth-x score of the position as follows.
LHS (Left Hand Side) = abs(probability_white_winning(p_static) - probability_white_winning(p_x)) + abs(probability_black_winning(p_static) - probability_black_winning(p_x)) = (abs(c_white_1) + abs(c_black_1))*x_1 + (abs(c_white_2) + abs(c_black_2))*x_2 + ... + (abs(c_white_n) + abs(c_black_n))*x_n = RHS (Right Hand Side), where probability_white_winning(p_s) and probability_black_winning(p_s) are functions mapping a position's score p_s to a winning probability in the interval [0, 1]. Why do I use winning probability and not the position's evaluation directly? Because going from a static score of 900 cp in White's favour to a depth-x score of 1000 cp does not matter much, since you will most probably win in either case; going from 0 cp to 100 cp does matter, though.
If you do the same for all the positions in your set, where the number of positions >> n, you can do a least-squares fit to determine a risk weight for each feature, i.e. the solution vector (x_1*, x_2*, ..., x_n*). Once you have determined x_1*, x_2*, ..., x_n*, you can see how much the winning probability fluctuates for each feature: a high value x_i* means feature i is easily over- or underestimated, i.e. x_i* is a measure of how risky that evaluation feature is.
If you want the risk evaluation to depend on the game stage, you could multiply the LHS of the equation by the weight of the game stage and do two least-squares fits.
I have never tried this myself so I cannot say it works.
Good luck!
/Pio
-
- Posts: 855
- Joined: Sun May 23, 2010 1:32 pm
Re: advanced tapered evaluation
If I have understood the content of your link, you are talking about a different evaluation when you are ahead/behind in material. Am I right?
-
- Posts: 334
- Joined: Sat Feb 25, 2012 10:42 pm
- Location: Stockholm
Re: advanced tapered evaluation
Hi Marco!
Yes, you are right. The idea is that when you are already winning you do not need to take such big risks, but when you are losing you should take risks that might pay off.
I guess that material is the most stable evaluation term, and the riskiest evaluation terms are probably king safety, passed pawns, and very low piece mobility. I think it is better to gather statistics on which evaluation terms are the riskiest than to listen to my advice. I suggested one way of gathering that kind of statistics, but I do not know if my proposal was the best.
If you think about it, my idea resembles the way humans think when searching for a good move.
You do not need to win in the most spectacular way.
/Pio
-
- Posts: 855
- Joined: Sun May 23, 2010 1:32 pm
Re: advanced tapered evaluation
I think the problem could be that today we calculate the eval as a sum of independent terms, and give some of them a big magnitude (king safety) to be sure that when the king is in trouble we don't care too much about an isolated pawn.
Maybe we could try not to have a "linear" evaluation.
-
- Posts: 334
- Joined: Sat Feb 25, 2012 10:42 pm
- Location: Stockholm
Re: advanced tapered evaluation
Hi Marco!
I think you are right, and that is why I proposed in http://www.talkchess.com/forum/viewtopi ... 37&t=48644 that you should not just sum up the different evaluation features.
My proposal on how to combine the features leads to exactly what you described: a non-linear evaluation.
The big problem with today's way of ignoring risk shows up, for example, when you have a single passed pawn. The passed pawn might be the winning feature, but it might also be lost and worth nothing. Giving the passed pawn a big bonus makes your engine try to defend that pawn, and might even make you sacrifice pieces for it, even though you are maybe two pawns ahead and really do not need to focus all your attention on the passed pawn.
/Pio
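Pio's passed-pawn example suggests one simple form of non-linearity: scale the bonus by how secure the pawn actually is, instead of awarding it unconditionally. A minimal sketch; the condition names and the 256-based weights are hypothetical, chosen only to illustrate the interaction:

```c
/* Damp a passed-pawn bonus when the pawn is unlikely to convert.
   The weights (halve when blockaded, further -64/256 when the defender
   controls the promotion path) are assumed values for illustration. */
static int passed_pawn_score(int base_bonus, int blockaded,
                             int defender_controls_path)
{
    int scale = 256;                         /* full value by default */
    if (blockaded)
        scale -= 128;                        /* blockaded: halve the bonus */
    if (defender_controls_path)
        scale -= 64;                         /* contested path: damp further */
    return (base_bonus * scale) >> 8;
}
```

Because the bonus now depends on the product of features rather than their sum, a risky pawn no longer dominates the eval, which is exactly the failure mode described above.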
-
- Posts: 855
- Joined: Sun May 23, 2010 1:32 pm
Re: advanced tapered evaluation
It's really hard to create a good eval function.
-
- Posts: 334
- Joined: Sat Feb 25, 2012 10:42 pm
- Location: Stockholm
Re: advanced tapered evaluation
I understand that.
I remember you helped me evaluate the following position http://www.talkchess.com/forum/viewtopi ... 87&t=47254 which I think might be a very difficult one to evaluate. Maybe the queen can escape, but it is too deep for me to see.