Best way to Recognize Known Endgames?

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

Forum rules
This textbox is used to restore diagrams posted with the [d] tag before the upgrade.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

Re: Best way to Recognize Known Endgames?

Post by Don » Sat Aug 03, 2013 4:46 pm

Steve Maughan wrote:Thanks everyone - some great input.

Here's what I'm thinking of trying first. I'm going to try something like Steven Edwards' suggestion. I'll hash the material and then use the "n" smallest bits to lookup the hash value in a table and compare the key. This table will be small. If I have 20 endgames I'd like to recognize (40 valid possible keys), I imagine I'll be able to get it in a 64 entry table. The table entry will contain a pointer to the function which will evaluate the endgame.

Thanks again - some good discussion!

Steve
If you have 20 endgames you will do just fine with a 32 entry table - just use linear probing: if an ending is already in the slot, go to the next one. But I really don't see any reason not to use a bigger table - 32, 64 or even 256 entries is small potatoes, not enough to have any noticeable impact on caching behavior.

You might also consider using a "perfect hash" table - if you know in advance how many entries will be in the table, just make sure they hash without collision. This can even be computed dynamically: basically have a table of random Zobrist keys, and if the table is not perfect, reinitialize it by trying a different set of keys - and do this until there are no collisions. You will want the table to be large enough so that this is not an exercise in futility; for example, a 32 entry table will be very difficult to compute without collisions on 20 entries. You could even size this dynamically if you are determined to make the table small (although I think this is a waste of time) by trying to make a perfect hash and, if this fails after N attempts, doubling the table size and trying again .... like I say, probably overkill, but on the other hand there is not any real downside except perhaps a slow startup.

But you could have a procedure like this for the initial sizing of the table and key selection that is "hard coded" into the code - with the only downside that you have to redo it when you add endings.
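[Editorial note] The retry procedure Don describes could be sketched roughly like this. Everything here is illustrative - function names, the PRNG, and the retry limit are all invented, not taken from any actual engine:

```c
/* Sketch of Don's suggestion: build a collision-free ("perfect") hash
 * table for a fixed set of endgame material keys by re-rolling the
 * random keys until no two endgames collide.  All names illustrative. */
#include <stdint.h>

#define MAX_TRIES 1000

/* xorshift-style PRNG, good enough for Zobrist-like keys */
static uint64_t rng_state = 0x9E3779B97F4A7C15ULL;
static uint64_t rand64(void) {
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

/* Try to assign each of n endgames a key that maps to a unique slot in
 * a table of table_size entries (a power of two, at most 256 here).
 * Returns 1 on success; keys[] then holds the collision-free keys. */
int build_perfect_hash(uint64_t keys[], int n, int table_size) {
    for (int attempt = 0; attempt < MAX_TRIES; attempt++) {
        char used[256] = {0};
        int ok = 1;
        for (int i = 0; i < n; i++) {
            keys[i] = rand64();
            int slot = (int)(keys[i] & (uint64_t)(table_size - 1));
            if (used[slot]) { ok = 0; break; }
            used[slot] = 1;
        }
        if (ok) return 1;
    }
    return 0;   /* caller could double table_size and retry, as Don says */
}
```

As Don notes, the fuller the table the more attempts this takes (birthday paradox), so the doubling fallback keeps it from becoming an exercise in futility.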
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.

lucasart
Posts: 3035
Joined: Mon May 31, 2010 11:29 am
Full name: lucasart

Re: Best way to Recognize Known Endgames?

Post by lucasart » Sat Aug 03, 2013 4:52 pm

Steve Maughan wrote:Thanks everyone - some great input.

Here's what I'm thinking of trying first. I'm going to try something like Steven Edwards' suggestion. I'll hash the material and then use the "n" smallest bits to lookup the hash value in a table and compare the key. This table will be small. If I have 20 endgames I'd like to recognize (40 valid possible keys), I imagine I'll be able to get it in a 64 entry table. The table entry will contain a pointer to the function which will evaluate the endgame.

Thanks again - some good discussion!

Steve
To begin with you could do a few heuristic functions that return a boolean: either "draw", or "don't know".

The problem is that in order to make an extensible framework, there are many things an heuristic function could return:

0/ don't know

1/ it's a draw (for sure)

2/ it's probably drawish, but not sure: return an eval factor (and scale down the eval accordingly)

3/ I don't know, but I can at least tell you that eval should be >= 0 or should be <= 0. For example in KNKP, you know that white cannot win, and the only side that can win (if any) is black with the pawn (a knight alone cannot mate, while the pawn can promote, and KNKQ is generally won for black).

4/ Yet another possibility is to return win scores. In general it doesn't make sense and it's best to let other eval terms handle it normally. But in special cases like the famous KNBK endgame, you need to return a special eval (which incentivises the KNB side to push the lone K into a corner of the colour of the B).
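[Editorial note] The cases lucasart lists could be folded into one small tagged result type. This is a sketch under invented names, with the "win score" case 4/ handled through the same exact-score path as case 1/:

```c
/* Cases 0-4 above as one tagged return type.  Names are invented
 * for illustration; real engines differ. */
typedef enum {
    EG_UNKNOWN,      /* 0/ fall through to the normal eval           */
    EG_EXACT,        /* 1/ and 4/: a sure score (0 = draw, or a win) */
    EG_SCALE,        /* 2/ drawish: scale the normal eval down       */
    EG_UPPER_BOUND   /* 3/ this side provably cannot win: cap eval   */
} EgKind;

typedef struct {
    EgKind kind;
    int value;   /* EG_EXACT: the score; EG_SCALE: percent kept;
                    EG_UPPER_BOUND: the cap (usually 0) */
} EgResult;

/* Combine a recognizer's verdict with the normally computed eval. */
int apply_eg_result(EgResult r, int normal_eval) {
    switch (r.kind) {
    case EG_EXACT:       return r.value;
    case EG_SCALE:       return normal_eval * r.value / 100;
    case EG_UPPER_BOUND: return normal_eval > r.value ? r.value
                                                      : normal_eval;
    default:             return normal_eval;   /* EG_UNKNOWN */
    }
}
```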
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.

hgm
Posts: 23376
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Best way to Recognize Known Endgames?

Post by hgm » Sat Aug 03, 2013 5:08 pm

I have always been in doubt whether it would be better to hash the material table (and calculate the entry on the fly if you get a miss), or pre-calculate the full table, indexed as a multi-dimensional array. The hit rate on hashing is usually extremely high, so the on-the-fly calculation doesn't have a significant impact on performance. And you can achieve that even with a small table. OTOH, I have the feeling that the hashing does in software what the cache hardware would transparently do for you. If the hit rate in a small hash table would be very high, it means that only a negligible part of the huge table is in use (and thus cached) at any time. And indeed, memory is cheap as long as you don't access it.

The table can get uncomfortably large, if you want to distinguish Queens from Chancellors and Archbishops, though. (And KQBKC is won, where KQBKQ and KCBKC are draw, so it does pay to distinguish them...)

About KBNK and the like: isn't it simply enough to multiply the King end-game piece-square table (centralizing the King) of the bare King by a sizable factor (say 10), so that driving that bare King to the corner becomes the overwhelmingly dominant concern? In Spartacus I switch the King PST as soon as the King becomes bare in the root. That always worked for me.
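[Editorial note] A minimal sketch of that idea. The PST here is just a centralization term, the per-step value and the factor are invented; a real engine's table would be tuned:

```c
/* hgm's idea in miniature: once the defending king is bare, score its
 * square from the normal endgame centralization term scaled by a big
 * factor, so driving it to the corner dominates everything else. */

/* distance of a 0..63 square (a1 = 0) from the board centre, 0..6 */
static int center_dist(int sq) {
    int f = sq & 7, r = sq >> 3;
    int df = f < 4 ? 3 - f : f - 4;   /* 0 on d/e files, 3 on a/h */
    int dr = r < 4 ? 3 - r : r - 4;
    return df + dr;
}

#define BARE_KING_FACTOR 10

/* Penalty (from the strong side's view) for the bare king's square:
 * an illustrative 5 cp per step from the centre, times the factor. */
int bare_king_score(int sq) {
    return BARE_KING_FACTOR * 5 * center_dist(sq);
}
```

With the factor applied, cornering the bare king is worth several pawns, which is the "overwhelmingly dominant concern" effect hgm describes.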

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

Re: Best way to Recognize Known Endgames?

Post by Don » Sat Aug 03, 2013 5:24 pm

lucasart wrote:
Steve Maughan wrote:Thanks everyone - some great input.

Here's what I'm thinking of trying first. I'm going to try something like Steven Edwards' suggestion. I'll hash the material and then use the "n" smallest bits to lookup the hash value in a table and compare the key. This table will be small. If I have 20 endgames I'd like to recognize (40 valid possible keys), I imagine I'll be able to get it in a 64 entry table. The table entry will contain a pointer to the function which will evaluate the endgame.

Thanks again - some good discussion!

Steve
To begin with you could do a few heuristic functions that return a boolean: either "draw", or "don't know".

The problem is that in order to make an extensible framework, there are many things an heuristic function could return:

0/ don't know

1/ it's a draw (for sure)

2/ it's probably drawish, but not sure: return an eval factor (and scale down the eval accordingly)

3/ I don't know, but I can at least tell you that eval should be >= 0 or should be <= 0. For example in KNKP, you know that white cannot win, and the only side that can win (if any) is black with the pawn (a knight alone cannot mate, while the pawn can promote, and KNKQ is generally won for black).

4/ Yet another possibility is to return win scores. In general it doesn't make sense and it's best to let other eval terms handle it normally. But in special cases like the famous KNBK endgame, you need to return a special eval (which incentivises the KNB side to push the lone K into a corner of the colour of the B).
You are pretty much describing what Komodo does. We have a table with fields for returning a material score, a positional "multiplier" and a function pointer - which is normally NULL. If there is no function pointer, we return the table value for the material score, then compute the positional score with the normal evaluation function. However, we multiply the value returned by the multiplier. (We don't want a dead draw to come out as 40 centipawns just because there is an advanced passed pawn, for example.) If the multiplier is 0.0 we don't even call the evaluation function; the table value is returned as is.

If there is a function it will simply do what the table does instead of using the table values (which are ignored). So it will compute a material score and also provide a multiplier, because sometimes we still want to use Komodo's normal positional evaluation function. In this way it is very flexible. We can do almost anything we want: we can use Komodo's evaluation, not use it, use it partially, etc. There are good reasons to do all of those combinations of things in simple endings.
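[Editorial note] A stripped-down sketch of such a table entry. This is not Komodo's code: the field names, the percent-based multiplier, and the handler shape are all invented for illustration (a real handler would take the position and the engine would skip computing the positional score when the multiplier is zero):

```c
#include <stddef.h>

/* Handler takes the already-computed positional score here only to
 * keep the sketch self-contained; in a real engine it would take the
 * position and compute what it needs. */
typedef int (*EgHandler)(int positional_score);

typedef struct {
    int material_score;   /* material part of the returned score     */
    int multiplier_pct;   /* scales the positional part; 0 = dead draw */
    EgHandler handler;    /* usually NULL; overrides the two fields  */
} EndgameEntry;

int evaluate_entry(const EndgameEntry *e, int positional_score) {
    if (e->handler)                     /* special handler takes over */
        return e->handler(positional_score);
    if (e->multiplier_pct == 0)         /* dead draw: table value as-is */
        return e->material_score;
    return e->material_score
         + positional_score * e->multiplier_pct / 100;
}
```

The multiplier is what keeps a dead draw from coming out as 40 centipawns just because one side has an advanced passed pawn.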

We have always had the simple ending table, but it has been reworked significantly since 5.1r2 and it will make Komodo 6 much stronger - one of our goals is to have much stronger endgame play. Material hashing is new to Komodo but has existed in chess programs for years; it is one of those things I had never gotten around to. It opens the door to more sophisticated evaluation of unbalanced positions, which took time to compute in the old Komodo and thus was a barrier to adding more of this kind of knowledge. Komodo 5.1 has to take a small speed hit for every additional piece of knowledge we add, even for things like the bishop pair.

We have already seen some non-trivial ELO gain - I think we are already ahead of CCT now, so it's already paying off, although some of this gain is from other things.
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.

syzygy
Posts: 4441
Joined: Tue Feb 28, 2012 10:56 pm

Re: Best way to Recognize Known Endgames?

Post by syzygy » Sat Aug 03, 2013 5:34 pm

hgm wrote:I have always been in doubt whether it would be better to hash the material table (and calculate the entry on the fly if you get a miss), or pre-calculate the full table, indexed as a multi-dimensional array. The hit rate on hashing is usually extremely high, so the on-the-fly calculation doesn't have a significant impact on performance. And you can achieve that even with a small table. OTOH, I have the feeling that the hashing does in software what the cache hardware would transparently do for you. If the hit rate in a small hash table would be very high, it means that only a negligible part of the huge table is in use (and thus cached) at any time. And indeed, memory is cheap as long as you don't access it.
Apart from the fact that a huge table takes up RAM that might be useful for other purposes (e.g. for a larger TT table or for caching tablebases), both methods should be about equally cheap for the reason that a search will see relatively few unique material signatures. I guess the question is, is it cheaper to (incrementally?) calculate the multidimensional index or is it cheaper to (incrementally) calculate the material signature hash key.
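[Editorial note] The hash-key side of syzygy's question - incrementally maintaining the material signature - can be sketched like this. The piece list and key layout are invented; the point is that each piece-count change is a single xor:

```c
/* Incrementally maintained material signature: xor in a random key
 * indexed by (piece type, current count of that piece).  A capture is
 * one xor; a promotion is two (pawn out, piece in). */
#include <stdint.h>

enum { WP, WN, WB, WR, WQ, BP, BN, BB, BR, BQ, NPIECE };

static uint64_t mat_zobrist[NPIECE][11];   /* counts 0..10 per type */
static uint64_t mat_key;
static int count[NPIECE];

void mat_init(void) {
    uint64_t s = 0x0123456789ABCDEFULL;    /* deterministic PRNG seed */
    for (int p = 0; p < NPIECE; p++)
        for (int c = 0; c < 11; c++) {
            s ^= s << 13; s ^= s >> 7; s ^= s << 17;
            mat_zobrist[p][c] = s;
        }
}

void add_piece(int pc) {
    count[pc]++;
    mat_key ^= mat_zobrist[pc][count[pc]];
}

void remove_piece(int pc) {
    mat_key ^= mat_zobrist[pc][count[pc]];
    count[pc]--;
}
```

The multidimensional-index alternative would instead keep a running index like `idx += step[pc]` on each add/remove; both are a couple of arithmetic ops per capture or promotion, which is syzygy's point that they cost about the same.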
hgm wrote:About KBNK and the like: isn't it simply enough to multiply the King end-game piece-square table (centralizing the King) of the bare King by a sizable factor (say 10), so that driving that bare King to the corner becomes the overwhelmingly dominant concern? In Spartacus I switch the King PST as soon as the King becomes bare in the root. That always worked for me.
For KBNK it seems important to me to distinguish between good corners and bad corners. There will always be a time control short enough that the search will not be able to figure out a win when the opponent king is in a bad corner.
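[Editorial note] A sketch of such a corner-aware term for KBNK - measure the defending king's distance to the nearest corner of the bishop's colour rather than to any corner. Square numbering (a1 = 0) and names are invented:

```c
#include <stdlib.h>

/* a1 = square 0; a1 is a dark square, so (rank + file) even = dark */
static int is_dark(int sq) {
    return (((sq >> 3) + (sq & 7)) % 2) == 0;
}

static int chebyshev(int a, int b) {
    int df = abs((a & 7) - (b & 7));
    int dr = abs((a >> 3) - (b >> 3));
    return df > dr ? df : dr;
}

/* Distance from the defending king to the nearest "good" corner:
 * a1/h8 for a dark-squared bishop, a8/h1 for a light-squared one. */
int dist_to_mating_corner(int king_sq, int bishop_sq) {
    int c1 = is_dark(bishop_sq) ? 0 : 56;    /* a1 or a8 */
    int c2 = is_dark(bishop_sq) ? 63 : 7;    /* h8 or h1 */
    int d1 = chebyshev(king_sq, c1);
    int d2 = chebyshev(king_sq, c2);
    return d1 < d2 ? d1 : d2;
}
```

Scoring against this distance pulls the defending king toward a corner the bishop can actually cover, instead of letting it sit safely in a wrong-coloured one.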

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

Re: Best way to Recognize Known Endgames?

Post by Don » Sat Aug 03, 2013 5:37 pm

hgm wrote:I have always been in doubt whether it would be better to hash the material table (and calculate the entry on the fly if you get a miss), or pre-calculate the full table, indexed as a multi-dimensional array. The hit rate on hashing is usually extremely high, so the on-the-fly calculation doesn't have a significant impact on performance. And you can achieve that even with a small table. OTOH, I have the feeling that the hashing does in software what the cache hardware would transparently do for you. If the hit rate in a small hash table would be very high, it means that only a negligible part of the huge table is in use (and thus cached) at any time. And indeed, memory is cheap as long as you don't access it.

The table can get uncomfortably large, if you want to distinguish Queens from Chancellors and Archbishops, though. (And KQBKC is won, where KQBKQ and KCBKC are draw, so it does pay to distinguish them...)

About KBNK and the like: isn't it simply enough to multiply the King end-game piece-square table (centralizing the King) of the bare King by a sizable factor (say 10), so that driving that bare King to the corner becomes the overwhelmingly dominant concern? In Spartacus I switch the King PST as soon as the King becomes bare in the root. That always worked for me.
Even for chess it's almost impossible to pre-calculate EVERY possible material signature without imposing some limitations. You can have legal positions with 10 knights for one side for example. And there are people who will set up such positions so you have to be able to handle them.

However the hit rate is so high that it makes sense to calculate material on the fly, so that is what we do. It's far more efficient than pawn structure hashing, for example, especially when king positions are part of the key space.

For a high performance program you possibly could use your scheme with some limitations, provided there is a fall-back scheme in place. But there is a little performance hit for testing for that - so I think it's better to just dynamically maintain a small table. I love the idea though. It might work if you assume that there will never be more than 2 knights, 1 queen, etc. - and have a fall-back scheme such as using a more conventional table. Did you compute the space required? I think it's pretty huge.

One interesting aspect of this is that for every promoted piece there is one less pawn too, so a simple flat table probably wastes a lot of space.
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

Re: Best way to Recognize Known Endgames?

Post by Don » Sat Aug 03, 2013 5:53 pm

syzygy wrote:For KBNK it seems important to me to distinguish between good corners and bad corners. There will always be a time control short enough that the search will not be able to figure out a win when the opponent king is in a bad corner.
Komodo does handle that ending, but the top programs (and perhaps most others?) will win it anyway at long time controls, even without any help, using the normal evaluation function. They can actually see the mate from a huge distance. Still, I love to see Komodo quickly and efficiently win this at fast time controls.

In my routine I ignore the normal evaluation function and give points for the enemy king's distance to the appropriate corner, a bonus for having the friendly king centralized (that bonus is only slight), and a little bigger bonus for being close to the enemy king. At end nodes you want to see all of these things as a general feel for progress towards mate: 1. the enemy king near the proper corner, 2. the computer's king fairly close to the enemy king, and 3. the friendly king closer to the center than the enemy king.

In the old days I had a mop-up evaluation but we don't really need that these days for the basic mates. Our king table already gives progressive penalties for being in the corner so the program wants to force the enemy king into the corner. A "progressive" bonus or penalty means it's NOT LINEAR - so the friendly king does not mind being slightly off center if it means the enemy king is even more off-center.
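[Editorial note] The three terms Don lists could be sketched as below. The weights are invented for illustration, and for brevity the terms here are linear, where Don's real table is progressive (non-linear):

```c
#include <stdlib.h>

static int cheb(int a, int b) {            /* king-move distance */
    int df = abs((a & 7) - (b & 7));
    int dr = abs((a >> 3) - (b >> 3));
    return df > dr ? df : dr;
}

static int center_dist(int sq) {           /* 0 at centre, 6 in a corner */
    int f = sq & 7, r = sq >> 3;
    int df = f < 4 ? 3 - f : f - 4;
    int dr = r < 4 ? 3 - r : r - 4;
    return df + dr;
}

/* Mop-up score from the strong side's view (squares a1 = 0). */
int mopup_score(int our_king, int their_king) {
    int score = 0;
    score += 20 * center_dist(their_king);           /* 1. drive them out */
    score += 10 * (7 - cheb(our_king, their_king));  /* 2. stay close     */
    score +=  5 * (center_dist(their_king)
                   - center_dist(our_king));         /* 3. be more central */
    return score;
}
```

The third term is what distinguishes a king that merely sits in the centre from one that is more central than its opponent, the finer distinction Don defends in his next post.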
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.

Sven
Posts: 3809
Joined: Thu May 15, 2008 7:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Best way to Recognize Known Endgames?

Post by Sven » Sat Aug 03, 2013 6:35 pm

lucasart wrote:3/ I don't know, but I can at least tell you that eval should be >= 0 or should be <= 0. For example in KNKP, you know that white cannot win, and the only side that can win (if any) is black with the pawn (a knight alone cannot mate, while the pawn can promote, and KNKQ is generally won for black).
Bad example since there are exceptions ...
[d]8/8/8/8/8/p2N4/k1K5/8 w - - 0 1
A better example would be KNKQ.

Sven

Sven
Posts: 3809
Joined: Thu May 15, 2008 7:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Best way to Recognize Known Endgames?

Post by Sven » Sat Aug 03, 2013 6:39 pm

Don wrote:3. the friendly king closer to the center than the enemy king.
Is that third criterion necessary for mating, or for finding the mate faster?

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

Re: Best way to Recognize Known Endgames?

Post by Don » Sat Aug 03, 2013 6:59 pm

Sven Schüle wrote:
Don wrote:3. the friendly king closer to the center than the enemy king.
Is that third criterion necessary for mating, resp. for finding the mate faster?
Yes, it does help, because the friendly king serves a big role in pushing the enemy king to the edge and corner. Given two otherwise equal positions, you are much farther from mate if the friendly king is on the other side of the board.

Is it absolutely necessary? No; king centralization will automatically put the friendly king closer to the enemy king, but it won't make the finer distinctions that can be helpful.

With LMR and heavy pruning and other reductions you want to make the finest distinctions possible, so that the moves that are good get put to the front of the list (and thus not reduced). So the game I play when building these routines is to imagine I'm playing the ending and which moves are best - and then I ask myself if the scoring function would put those moves near the front of the list. If my king is several squares away from the enemy king, will it move toward that king or toward some random center square? They are not always the same. Sometimes a move AWAY from the king is STILL equally centered or even BETTER centered, such as e4 to d4 when the enemy king is on h5 - so without the king-distance heuristic there is no scoring penalty for moving in the opposite direction of the king, which is obviously wrong.

I don't do this in the other basic mates; the only reason I have it for KNBK is that I specifically wrote a handler for it. I think almost any program would still mate at any reasonable level even if all you do is identify the correct corner.
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
