Anyone tried it?
I think it's a good idea to try something like ANOVA against a large set of positions to find factors that influence PSQ values. Possible candidates are:
1. Open/halfopen files,
2. Both kings sides,
3. Pawn center formation
and so on.
This approach can generalize a lot of eval terms such as:
1. Rook at open/halfopen file,
2. Rook/queen at 7th/8th row,
3. Knight/bishop outposts,
4. Trapped pieces etc.
Some months ago I made a simple utility to build classification trees and I think it would be useful for chess. Anyone already tried this approach?
PSQ tables depending on king sides, pawn patterns etc.
Moderators: hgm, Rebel, chrisw
-
- Posts: 227
- Joined: Mon Sep 12, 2011 11:27 pm
- Location: Moscow, Russia
PSQ tables depending on king sides, pawn patterns etc.
The Force Be With You!
-
- Posts: 893
- Joined: Mon Jan 15, 2007 11:23 am
- Location: Warsza
Re: PSQ tables depending on king sides, pawn patterns etc.
Where would You like to do this analysis and resetting of the piece/square tables?
If at the root, then it is the old idea of preprocessing, probably not feasible at today's depths, though RomiChess still sticks to it.
If at each eval() call, then You are basically changing the data representation to something that is probably more elegant and obviously a bit slower (You would first apply a bonus for an open file and only later check whether there is a rook that can take advantage of it). There might be a way to make it work in a knowledge-heavy program, provided that eval is done at each node and the tables are recycled for move ordering.
If at each change of pawn structure and king position, then it's a matter of efficiency, especially of guarding against slowdown in the endgame, of clever usage of pawn hash table etc. I'd be afraid of somewhat bloated code and data structures here.
Of course, it's just my 2 cents, You are much better programmer and You can make it work.
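Pawel's third option (rebuild the tables only when the pawn structure or a king position changes) can be sketched roughly as below. The cache layout and every name here are hypothetical, just to show the shape of the idea, not code from any actual engine:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t pawn_key;      /* Zobrist key of the pawn structure */
    int      wking, bking;  /* king squares the tables were built for */
    int      psq[2][6][64]; /* [side][piece][square] corrections */
    int      valid;
} PsqCache;

static int rebuilds = 0;    /* counts how often we actually recompute */

/* Rebuild only when an input the tables depend on has changed. */
static void refresh_psq(PsqCache *c, uint64_t pawn_key, int wking, int bking)
{
    if (c->valid && c->pawn_key == pawn_key &&
        c->wking == wking && c->bking == bking)
        return; /* cache hit: nothing to do */
    /* ... fill c->psq[][][] from the pawn pattern and king positions ... */
    c->pawn_key = pawn_key;
    c->wking   = wking;
    c->bking   = bking;
    c->valid   = 1;
    ++rebuilds;
}
```

Whether this wins anything depends, as Pawel says, on how often the pawn key and king squares actually change along a search path, and on guarding against endgame slowdown.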
Pawel Koziol
http://www.pkoziol.cal24.pl/rodent/rodent.htm
-
- Posts: 1600
- Joined: Mon Feb 21, 2011 9:48 am
Re: PSQ tables depending on king sides, pawn patterns etc.
Regarding this:
2. Both kings sides,
3. Pawn center formation
It's one of the things that gives more personality to my engine,
for WHITE Pawns and SCORE(OPENING, ENDGAME):
Code: Select all
if ((whiteBitboardKing & SideQueen) && (blackBitboardKing & SideKing))
{
    Value += PAWN_QK_W[b]; // [b] = SCORE(OPENING, ENDGAME)
}
if ((whiteBitboardKing & SideQueen) && (blackBitboardKing & SideQueen))
{
    Value += PAWN_QQ_W[b];
}
if ((whiteBitboardKing & SideKing) && (blackBitboardKing & SideQueen))
{
    Value += PAWN_KQ_W[b];
}
if ((whiteBitboardKing & SideKing) && (blackBitboardKing & SideKing))
{
    Value += PAWN_KK_W[b];
}
#ifdef CENTER
if ((blackBitboardKing & CenterBoard) && !(whiteBitboardKing & CenterBoard))
{
    Value += PAWN_C_W[b];
}
#endif
Still experimenting with the idea of continuing with other pieces, or, as you say, those other ideas....
Jose.
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: PSQ tables depending on king sides, pawn patterns etc.
It used to be the way most programs did this, but it has a serious flaw: if you set up the PST values at the root of the tree and then search 30+ plies, the terminal positions are so far removed from the root that the PST values are almost meaningless.
Sergei S. Markoff wrote: Anyone tried it?
I think it's a good idea to try something like ANOVA against a large set of positions to find factors that influence PSQ values. Possible candidates are:
1. Open/halfopen files,
2. Both kings sides,
3. Pawn center formation
and so on.
This approach can generalize a lot of eval terms such as:
1. Rook at open/halfopen file,
2. Rook/queen at 7th/8th row,
3. Knight/bishop outposts,
4. Trapped pieces etc.
Some months ago I made a simple utility to build classification trees and I think it would be useful for chess. Anyone already tried this approach?
For your example: an open file at the root might not be open 30+ plies into the search, and a closed file at the root might well be open 30+ plies in. The list goes on and on.
-
- Posts: 227
- Joined: Mon Sep 12, 2011 11:27 pm
- Location: Moscow, Russia
Re: PSQ tables depending on king sides, pawn patterns etc.
Robert, Pawel, it's not about game-time processing. It's about having not just opening/endgame PSQs, but also some additional tables to correct the PSQ scores, for example:
1. Tables for 1) white king kingside, black king kingside, 2) white king kingside, black king queenside, 3) white king queenside, black king kingside, 4) both kings queenside; or, done another way, tables that correct the PSQs depending on the own king being on the H-file or the A-file, with scaling depending on the file, plus the same kind of tables depending on the opponent king's file.
2. Tables with PSQ adjustments for every open-file formation (an 8-bit pattern). You really don't need 256 tables: you can join the most similar tables using cluster analysis (ANOVA), and you can also join values inside the tables using some polynomials. There are a lot of ways to do it, but the general approach is to produce eval terms using statistical methods, to reach not only near-to-optimal values of the terms but also a near-to-optimal set of terms themselves.
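A minimal sketch of how such an 8-bit open-file pattern could be computed from the pawn bitboards (assuming the common a1 = bit 0 layout; the function name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* One byte: bit f is set iff file f (a = 0 .. h = 7) holds no pawn of
   either side. The result could index the correction tables described
   above (or a clustered subset of them). */
static uint8_t open_file_pattern(uint64_t white_pawns, uint64_t black_pawns)
{
    uint64_t pawns = white_pawns | black_pawns;
    uint8_t pattern = 0;
    for (int f = 0; f < 8; ++f) {
        /* all 8 squares of file f, assuming a1 = bit 0, b1 = bit 1, ... */
        uint64_t file_mask = 0x0101010101010101ULL << f;
        if ((pawns & file_mask) == 0)
            pattern |= (uint8_t)(1 << f);
    }
    return pattern;
}
```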
The Force Be With You!
-
- Posts: 227
- Joined: Mon Sep 12, 2011 11:27 pm
- Location: Moscow, Russia
Re: PSQ tables depending on king sides, pawn patterns etc.
Yes, that's exactly what I mean. You're also able to use some "scaling" techniques to avoid the sharp-boundary problem (i.e. a king on the e- or d-file). But the important idea is to produce all these tables using statistical analysis.
velmarin wrote: Regarding this:
2. Both kings sides,
3. Pawn center formation
It's one of the things that give more personality to my engine,
for WHITE Pawns and SCORE(OPENING, ENDGAME): Still experimenting with the idea of continuing with other pieces, or, as you say, those other ideas....
Code: Select all
if ((whiteBitboardKing & SideQueen) && (blackBitboardKing & SideKing))
{
    Value += PAWN_QK_W[b]; // [b] = SCORE(OPENING, ENDGAME)
}
if ((whiteBitboardKing & SideQueen) && (blackBitboardKing & SideQueen))
{
    Value += PAWN_QQ_W[b];
}
if ((whiteBitboardKing & SideKing) && (blackBitboardKing & SideQueen))
{
    Value += PAWN_KQ_W[b];
}
if ((whiteBitboardKing & SideKing) && (blackBitboardKing & SideKing))
{
    Value += PAWN_KK_W[b];
}
#ifdef CENTER
if ((blackBitboardKing & CenterBoard) && !(whiteBitboardKing & CenterBoard))
{
    Value += PAWN_C_W[b];
}
#endif
Jose.
The Force Be With You!
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: PSQ tables depending on king sides, pawn patterns etc.
PSQ tables are usually updated incrementally, so that means one needs to go over all pieces to sum up their PSQT contributions to use your method. If that is the case, wouldn't it be easier to do it using the standard evaluation? In most cases pre-calculation for many patterns is preferred over online calculation, such as the one you suggested for the 8-bit open-file pattern, if you can save significant eval time. Also, an open file mostly affects only the placement of the king, since all other pieces can jump over it. I guess that many engines have open-file evaluation in their king-safety terms that moves the king away from them.
Sergei S. Markoff wrote: Robert, Pawel, it's not about game-time processing. It's about having not just opening/endgame PSQs, but also some additional tables to correct the PSQ scores, for example:
1. Tables for 1) white king kingside and black king kingside, 2) white king kingside, black king queenside, 3) white king queenside, black king kingside, 4) both kings queenside or maybe another way — tables to correct PSQs depending on own king at H-file and own king A-file with scaling depending on file and same tables to correct PSQ depending on opponent king file.
2. Tables with PSQ ajustments for every open files formation (8 bit pattern); you're really don't need 256 tables, you can join most similar tables using cluster analysis (ANOVA) and also you're able to join values inside tables using some polynomials — there are a lot of ways how to do it — but the general approach is to produce eval terms using statistical methods to reach not only near-to-optimal values of terms but also a near-to-optimal set of terms themselves.
How is ANOVA (analysis of variance) going to be used to reduce your database, besides the kingside/queenside symmetries that you can exploit without any additional analysis? You are assuming in your posts that we understand your procedures, especially the first one, which gives away nothing, so it would be good if you could explain more.
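For reference, the incremental PSQT bookkeeping Daniel mentions is usually just a constant-time delta applied on make-move; a toy sketch (all names invented) of why a king-dependent table breaks it:

```c
#include <assert.h>

/* Stand-ins for one piece's PSQ table and one side's running PSQT sum.
   As long as pst[] is fixed, a move costs O(1) to account for; if pst[]
   itself changes with king position or pawn structure, the whole sum
   must be rebuilt by looping over every piece. */
static int pst[64];
static int psq_score;

static void make_move(int from, int to)
{
    psq_score += pst[to] - pst[from]; /* O(1) incremental update */
}
```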
-
- Posts: 227
- Joined: Mon Sep 12, 2011 11:27 pm
- Location: Moscow, Russia
Re: PSQ tables depending on king sides, pawn patterns etc.
PSQ means piece-square tables by definition. Of course, in the case of weight corrections based on other factors such as king position, you should probably do it statically, but I think it will not cause a huge slowdown, because in a modern engine you already iterate through the pieces to count mobility, outposts, king pressure and so on.
Daniel Shawul wrote: PSQ tables are usually updated incrementally, so that means one needs to go over all pieces to sum up their PSQT contributions to use your method. If that is the case, wouldn't it be easier to do it using the standard evaluation?
The problem is that we have some assumptions and "rationalizations" about eval terms, but I think they should be statistically proven. Maybe a given factor affects something, maybe not; for example, it may be worth "closing" an open file near your king with a pawn-protected bishop, and so on.
Daniel Shawul wrote: In most cases pre-calculation for many patterns is preferred over online calculation, such as the one you suggested for the 8-bit open-file pattern, if you can save significant eval time. Also an open file mostly affects only the placement of the king, since all other pieces can jump over it.
The question is not about which eval terms to use, but about how to create a method that builds an at least close-to-optimal set of terms together with their weights. There probably are such methods, but as a first step you should create some generalized eval-term definitions that can include almost all the usual eval terms as a subset.
Well, I'm using a very simple way to tune my eval. I have an MS Excel table with a huge set of positions from real games, each at least 2 plies before a capture/promotion and with abs(original eval) < 300, to remove most of the "tactical noise". For each position I have the game result and a decomposition of my eval (material, parts of king safety and so on), so I can change some eval params in Excel and see the resulting eval change. I also have a very simple polynomial to approximate the dependency between game result and eval: result = eval * 0.0024 (-1 = lost, 0 = draw, 1 = win).
Daniel Shawul wrote: How is ANOVA (analysis of variance) going to be used to reduce your database, besides the kingside/queenside symmetries that you can exploit without any additional analysis? You are assuming in your posts that we understand your procedures, especially the first one, which gives away nothing, so it would be good if you could explain more.
Here is the deviation formula that I use (I've made a lot of experiments with it, including using the number of plies before a win/loss and so on, but in practice this one is the most effective at improving the engine's self-play results):
dev = (1 + ABS(old_eval - new_eval) * 0.0024) * (real_result - new_eval * 0.0024) ^ 2
So I'm trying to minimize the sum of dev over all positions by tuning some eval terms (alone and in some "buckets").
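A direct translation of that formula into C, assuming evals in centipawns and the 0.0024 scale from the polynomial above:

```c
#include <assert.h>
#include <math.h>

/* Sergei's deviation: the squared prediction error of the new eval,
   weighted up when the new eval strays far from the old one. */
static double dev(double old_eval, double new_eval, double real_result)
{
    const double scale = 0.0024;                       /* eval -> [-1, 1] */
    double weight = 1.0 + fabs(old_eval - new_eval) * scale;
    double err = real_result - new_eval * scale;
    return weight * err * err;
}
```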
So then, how to use ANOVA? Let's assume you have a set of ways to split your set of positions into two parts: "having factor X" and not having it. Iterate through the whole set of factors and find the one that splits your position set into two parts with the maximum difference between their mean values. Of course, you should include some statistical-significance control that takes the part sizes into account. You can repeat this split procedure recursively for every part, and finally you will obtain a classification tree. Let me show you:
Code: Select all
int CannotBeSkipped(int alpha_minus_static_eval_plus_gain, int static_eval, int optimism_opponent, int gain, int pieces_opponent)
{
    int objective;
    if (alpha_minus_static_eval_plus_gain < -558) // 26804
    {
        objective = 1;
    }
    else // 239808
    {
        if (alpha_minus_static_eval_plus_gain < -40) // 106613
        {
            objective = 1;
        }
        else // 133195
        {
            if (alpha_minus_static_eval_plus_gain < -11) // 13704
            {
                objective = 1;
            }
            else // 119491
            {
                if (alpha_minus_static_eval_plus_gain < 21) // 12462
                {
                    objective = 1;
                }
                else // 107029
                {
                    if (alpha_minus_static_eval_plus_gain < 58) // 10789
                    {
                        objective = 1;
                    }
                    else // 96240
                    {
                        if (gain < 478) // 86503
                        {
                            if (alpha_minus_static_eval_plus_gain < 113) // 8734
                            {
                                objective = 1;
                            }
                            else // 77769
                            {
                                if (pieces_opponent < 10) // 7912
                                {
                                    if (static_eval < -498) // 3785
                                    {
                                        objective = 0;
                                    }
                                    else // 4127
                                    {
                                        objective = 1;
                                    }
                                }
                                else // 69857
                                {
                                    if (alpha_minus_static_eval_plus_gain < 183) // 6996
                                    {
                                        if (optimism_opponent < 26) // 3210
                                        {
                                            objective = 0;
                                        }
                                        else // 3786
                                        {
                                            objective = 1;
                                        }
                                    }
                                    else // 62861
                                    {
                                        objective = 0;
                                    }
                                }
                            }
                        }
                        else // 9737
                        {
                            if (alpha_minus_static_eval_plus_gain < 143) // 3001
                            {
                                objective = 1;
                            }
                            else // 6736
                            {
                                if (alpha_minus_static_eval_plus_gain < 395) // 3485
                                {
                                    objective = 1;
                                }
                                else // 3251
                                {
                                    objective = 0;
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    return objective;
}
So, this approach can be used to find the most promising eval terms. You're able to analyze the distribution of signed deviations against a set of possible factors, trying to explain the current evaluation "tails". Moreover, ANOVA is able to suggest eval weights.
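The core of one split step in this procedure might look like the sketch below: for a single candidate factor (a boolean predicate over positions), compare the mean outcomes of the two parts. A real tool would scan many factors, apply the significance control mentioned above, and recurse; all names here are illustrative:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Difference between the mean outcome of positions having the factor
   and positions lacking it; the factor maximizing |mean_diff| (subject
   to significance) becomes the next split in the classification tree. */
static double mean_diff(const double *y, const int *has_factor, size_t n)
{
    double s1 = 0.0, s0 = 0.0;
    size_t n1 = 0, n0 = 0;
    for (size_t i = 0; i < n; ++i) {
        if (has_factor[i]) { s1 += y[i]; ++n1; }
        else               { s0 += y[i]; ++n0; }
    }
    if (n1 == 0 || n0 == 0)
        return 0.0; /* degenerate split: no information */
    return s1 / (double)n1 - s0 / (double)n0;
}
```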
More questions are welcome.
The Force Be With You!
-
- Posts: 7220
- Joined: Mon May 27, 2013 10:31 am
Re: PSQ tables depending on king sides, pawn patterns etc.
I removed mobility from evaluation because I thought it was too expensive. I replaced it with some surrogate pre-computed mobility. I don't know if that was a good decision. Maybe it is better to update mobility only at lower depths.
So be sure you don't introduce unnecessary performance bottlenecks.
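One cheap form such a surrogate could take (purely illustrative, not necessarily what Henk's engine does) is an empty-board move-count table filled once at startup, here for knights:

```c
#include <assert.h>

/* Pseudo-mobility: number of squares a knight attacks on an empty board,
   indexed by square (a1 = 0 .. h8 = 63). Ignoring occupancy is exactly
   what makes this free at eval time. */
static int knight_moves_from[64];

static void init_knight_mobility(void)
{
    const int df[8] = { 1, 2, 2, 1, -1, -2, -2, -1 };
    const int dr[8] = { 2, 1, -1, -2, -2, -1, 1, 2 };
    for (int sq = 0; sq < 64; ++sq) {
        int f = sq & 7, r = sq >> 3, count = 0;
        for (int i = 0; i < 8; ++i) {
            int nf = f + df[i], nr = r + dr[i];
            if (nf >= 0 && nf < 8 && nr >= 0 && nr < 8)
                ++count;
        }
        knight_moves_from[sq] = count;
    }
}
```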
-
- Posts: 227
- Joined: Mon Sep 12, 2011 11:27 pm
- Location: Moscow, Russia
Re: PSQ tables depending on king sides, pawn patterns etc.
Why don't you know? Why not just test it on a huge set of games?
Henk wrote: I removed mobility from evaluation because I thought it was too expensive. I replaced it with some surrogate pre-computed mobility. I don't know if that was a good decision.
The question is about necessity. Obviously any eval term has its speed cost, so you should be sure that the cost is less than the benefit.
Henk wrote: Maybe it is better to update mobility only at lower depths.
So be sure you don't introduce unnecessary performance bottlenecks.
The Force Be With You!