Advice on stabilizing the eval needed

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: Advice on stabilizing the eval needed

Post by BubbaTough »

Well, I think you are going down an interesting path, and encourage you to continue. In my old code base I tried something a bit different for a learning eval: I had a hand-chosen min/max for each eval characteristic, so that learned values could not go crazy and drift too low or too high. I was never super enamored of the results, so consider this information sharing, not a suggestion. While I share others' suspicion of applying root analysis at leaf nodes, applying static piece tables at all nodes seems at least as suspicious (and very common). So please, keep experimenting and sharing.
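The hand-chosen min/max idea Sam describes can be sketched in a few lines. This is only an illustration; the bounds and the function name are invented here, not taken from his code base:

```c
/* Sketch of clamping a learned evaluation term to hand-chosen bounds
   so that learning cannot drive it to an absurd value.
   TERM_MIN/TERM_MAX and clamp_term are invented for this example. */

#define TERM_MIN (-50)   /* assumed lower bound, in centipawns */
#define TERM_MAX   50    /* assumed upper bound, in centipawns */

static int clamp_term(int learned_value)
{
    if (learned_value < TERM_MIN) return TERM_MIN;
    if (learned_value > TERM_MAX) return TERM_MAX;
    return learned_value;
}
```

The trade-off Sam hints at is that the bounds themselves are hand-tuned, so the clamp can hide a genuinely useful learned value as easily as a pathological one.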

-Sam
Tony

Re: Advice on stabilizing the eval needed

Post by Tony »

bob wrote:
Michael Sherwin wrote:
Uri Blass wrote:
Michael Sherwin wrote: In RomiChess the piece/square tables are created dynamically before each search. This can lead to some extremely unbalanced evals, producing things like bad sacrifices and other pathological behaviors. So, what is best: limiting each square to a maximum, adding up all the squares and scaling them down to some total maximum, scaling wPos - bPos so that it is at most some +/- value, or something else?

Thanks!
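Two of the options in the question, a per-square cap and a proportional rescale of the whole table, might look roughly like this. The cap value and function names are invented for illustration:

```c
/* Two stabilization options for a dynamically built piece/square table:
   (a) clamp each square individually, (b) rescale the whole table so its
   largest entry fits under a cap. SQ_CAP and the names are invented. */

#define SQ_CAP 40  /* assumed per-square cap in centipawns */

static void clamp_each_square(int table[64])
{
    for (int sq = 0; sq < 64; sq++) {
        if (table[sq] >  SQ_CAP) table[sq] =  SQ_CAP;
        if (table[sq] < -SQ_CAP) table[sq] = -SQ_CAP;
    }
}

static void rescale_table(int table[64])
{
    int max = 0;
    for (int sq = 0; sq < 64; sq++) {
        int a = table[sq] < 0 ? -table[sq] : table[sq];
        if (a > max) max = a;
    }
    if (max > SQ_CAP)
        for (int sq = 0; sq < 64; sq++)
            table[sq] = table[sq] * SQ_CAP / max;  /* integer scaling */
}
```

The rescale variant preserves the relative shape of the table, while the per-square clamp flattens only the extreme entries; which of the two behaves better in play is exactly the experimental question being asked.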
I think it is best not to create piece/square tables dynamically before each search.
I am aware of the argument of root evaluation of squares versus leaf evaluation. I have decided to investigate the former, as it makes it possible to use huge amounts of chess knowledge to build the tables at virtually no cost in search time. Also, the move played at the root is far more important than any move that may or may not be made deeper in the tree, and information from the root is more strategically in tune with that first move. However, I still see the need for leaf evaluation, and that is where the pawn-structure eval and the two-bishop bonus are done (with other things planned). One thing I have not done yet is to give bonus/malus to pawn squares based on how the pawn placement affects the movement of the pieces. That should help a lot. None of this addresses my original question, though.

One thing of note is that I have worked mostly on the search these last two years and have only a couple of days invested in the eval so far. So, unless I am a genius (and I'm not) and created the almost absolute best 'root evaluator' from the start, there is still much to be gained from two more years' work on the eval.

So, other than the argument that modern searches are too deep, making root evaluation too inaccurate, what further argument, if any, is there against root evaluation?
It just doesn't work well. You are using information at position X to establish critical evaluation parameters. Then you evaluate position Y, which is _many_ plies removed from position X. At position X, king safety might have been meaningless, while at position Y, your king is terribly exposed. Processing at the root would base the eval on the position where your king is safe, and make terrible errors in the actual position being evaluated. It works when you search very shallow trees. But the deeper you can go, the farther the real positions you evaluate are removed from the root position where you set the terms up.
We were talking about psq tables, not about king safety.

If, for example, I can already see at the root which squares my pawns would be passed pawns on, why not give a little extra bonus for those squares?

It doesn't mean I no longer have to evaluate passed pawns, but it does mean that if I use these tables for move ordering or lazy eval, they will be more accurate than static, hardcoded ones.
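The passed-pawn idea above could be sketched like this, with invented helper names and an assumed 8x8 pawn table indexed [file][rank] from White's point of view:

```c
/* Sketch: before the search, mark the squares on which a white pawn
   would be passed (no black pawns ahead on the same or adjacent files)
   and add a small bonus to the dynamic pawn piece/square table.
   PASSED_BONUS, black_pawn_on, and the table layout are all invented
   for this illustration. */

#define PASSED_BONUS 15  /* assumed bonus in centipawns */

/* black_pawn_on(file, rank) stands in for a real board query;
   here it is backed by a simple array for the sketch. */
static int black_pawns[8][8];
static int black_pawn_on(int f, int r) { return black_pawns[f][r]; }

static int would_be_passed(int file, int rank)  /* white pawn on file/rank */
{
    for (int f = file - 1; f <= file + 1; f++) {
        if (f < 0 || f > 7) continue;
        for (int r = rank + 1; r < 7; r++)
            if (black_pawn_on(f, r)) return 0;
    }
    return 1;
}

static void add_passed_bonus(int pawn_psq[8][8])
{
    for (int f = 0; f < 8; f++)
        for (int r = 1; r < 7; r++)       /* pawns live on ranks 2..7 */
            if (would_be_passed(f, r))
                pawn_psq[f][r] += PASSED_BONUS;
}
```

As the post says, this does not replace the leaf-node passed-pawn evaluation; it only biases move ordering and lazy eval toward squares the root position already shows to be promising.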

Tony