Prehistoric major bugs in Transposition table

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

I think I'll call my engine "bug palace". I have encountered too many bugs lately.
Better to add extra test cases than to change the code too much.
JVMerlino
Posts: 1357
Joined: Wed Mar 08, 2006 10:15 pm
Location: San Francisco, California

Re: Prehistoric major bugs in Transposition table

Post by JVMerlino »

Henk wrote:I think I'll call my engine "bug palace". I have encountered too many bugs lately.
Better to add extra test cases than to change the code too much.
Does your move generation work perfectly yet? If not, then you shouldn't be working on anything else.

jm
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

JVMerlino wrote:
Henk wrote:I think I'll call my engine "bug palace". I have encountered too many bugs lately.
Better to add extra test cases than to change the code too much.
Does your move generation work perfectly yet? If not, then you shouldn't be working on anything else.

jm
I changed it in recent weeks, so I have to write perft() and test it. My engine can't load arbitrary positions, so it won't be that easy.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Prehistoric major bugs in Transposition table

Post by bob »

Sven Schüle wrote:
bob wrote:
Henk wrote:Looks like it plays even worse when fixing the bug, maybe because other stuff is tuned to work with this bug. But I want to work with code with least number of bugs.

I should have never trusted that code. But if you don't see what is wrong all the time. I think names of my transposition table helper methods were misleading too.
Get the terminology fixed in YOUR head first. Because it can be confusing and has confused many over the years.

[...]

If you get a LOWER entry, and the value is <= current alpha value, you can pretend you have searched everything and return alpha. If the table value is > current alpha, you can't do anything, because you have a window of opportunity to be wrong. We previously proved that the score was < table value (that's why we stored it) but if the actual alpha bound is less than that, then a score here could be less than value, but > alpha, which means we need to keep searching.

If you get an UPPER entry, and the value is >= current beta, you can pretend you did a search here and it failed high, and just return beta. But if the current tt value is < beta, you have a potential for the case where ttable-value < score < beta, so you really can't prove this should be a beta cutoff and you keep searching.
Didn't you mix up LOWER and UPPER again in those last two sections above? I think you return alpha if tt-value <= alpha and you have an UPPER_BOUND entry, and you return beta if tt-value >= beta and you have a LOWER_BOUND entry.
Certainly did... thanks...
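Sven's corrected convention can be put into a small probe routine. This is a hypothetical sketch in Python; the names `Entry`, `tt_probe`, and the flag constants are made up for illustration and are not from any particular engine:

```python
from dataclasses import dataclass

# Flag convention after Sven's correction: UPPER means the stored value
# is an upper bound on the true score (the node failed low); LOWER means
# it is a lower bound (the node failed high). Illustrative names only.
EXACT, LOWER, UPPER = 0, 1, 2

@dataclass
class Entry:
    depth: int
    value: int
    flag: int

def tt_probe(entry, depth, alpha, beta):
    """Return a usable score for this node, or None to keep searching."""
    if entry is None or entry.depth < depth:
        return None
    if entry.flag == EXACT:
        return entry.value
    # Stored upper bound at or below alpha: the search here cannot
    # improve alpha, so pretend we searched everything and return alpha.
    if entry.flag == UPPER and entry.value <= alpha:
        return alpha
    # Stored lower bound at or above beta: the node would fail high
    # again, so return beta.
    if entry.flag == LOWER and entry.value >= beta:
        return beta
    return None
```

Note the two cases Sven highlights: an UPPER entry inside the window (value > alpha) and a LOWER entry inside the window (value < beta) both fall through to a real search, because the stored bound leaves room for the true score to land between alpha and beta.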
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Prehistoric major bugs in Transposition table

Post by bob »

Henk wrote:
Sven Schüle wrote:
bob wrote:
Henk wrote:Looks like it plays even worse when fixing the bug, maybe because other stuff is tuned to work with this bug. But I want to work with code with least number of bugs.

I should have never trusted that code. But if you don't see what is wrong all the time. I think names of my transposition table helper methods were misleading too.
Get the terminology fixed in YOUR head first. Because it can be confusing and has confused many over the years.

[...]

If you get a LOWER entry, and the value is <= current alpha value, you can pretend you have searched everything and return alpha. If the table value is > current alpha, you can't do anything, because you have a window of opportunity to be wrong. We previously proved that the score was < table value (that's why we stored it) but if the actual alpha bound is less than that, then a score here could be less than value, but > alpha, which means we need to keep searching.

If you get an UPPER entry, and the value is >= current beta, you can pretend you did a search here and it failed high, and just return beta. But if the current tt value is < beta, you have a potential for the case where ttable-value < score < beta, so you really can't prove this should be a beta cutoff and you keep searching.
Didn't you mix up LOWER and UPPER again in those last two sections above? I think you return alpha if tt-value <= alpha and you have an UPPER_BOUND entry, and you return beta if tt-value >= beta and you have a LOWER_BOUND entry.
About updating entries in the transposition table:

For the same position, all upper bounds at the same search depth should have the same value, for it is only an upper bound if all moves have been searched through. So updating entries in that case makes no sense.

But lower bounds at the same depth can differ because of fail highs, which means that not all moves are searched through. So updating a lower bound in a transposition table entry can be useful if a new lower bound has been found that is greater than the old lower bound stored for the same position at the same depth.

At least my search algorithm only fails low if all moves have been searched through. But I'm not totally sure the values stored in my transposition table are proven right, for the applied reductions might differ because the alpha and beta search parameters were different when the entry was stored. Or should the applied reductions not depend on alpha and beta?
I don't follow. With negamax, what is a lower bound at one ply is an upper bound at the previous or next ply. If you search all moves at any ply with no improvement to alpha, you return alpha, obviously. So I assume we are the same there.

Ignore reductions and extensions. Remember, you probe at the TOP of search, before you have done any modifications to depth at all. So you should store at the end of search using the same depth that was passed in, so that you could re-use this ttable entry here if you came right back for some reason. And the depth would be identical for the ttable entry and the current depth that was passed in.
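The "probe at the top, store with the depth that was passed in" idea can be shown with a toy tree where two root moves transpose into the same position. This is a made-up miniature (the tree, `len()` as a stand-in evaluation, and the node counter are all illustrative), not real engine code:

```python
# Toy negamax with a transposition table keyed on (node, depth).
# The entry for a node is stored with the SAME depth that search()
# received, so a later arrival at the same node with the same remaining
# depth can reuse it, regardless of any reductions applied to children.

def search(node, depth, tree, tt, counter):
    hit = tt.get((node, depth))
    if hit is not None:            # probe at the TOP of search
        return hit
    counter[0] += 1                # count nodes actually searched
    if depth == 0 or node not in tree:
        value = len(node)          # stand-in for a static evaluation
    else:
        value = max(-search(child, depth - 1, tree, tt, counter)
                    for child in tree[node])
    tt[(node, depth)] = value      # store with the depth passed in
    return value

# Two root moves transpose into the same position "X":
tree = {"root": ["X1", "X2"], "X1": ["X"], "X2": ["X"], "X": ["a", "b"]}
tt, counter = {}, [0]
search("root", 3, tree, tt, counter)
```

Without the table this tree costs 9 node visits; with it, the subtree under "X" is searched once and the second arrival via "X2" is answered from the entry, for 6 visits. The stored key uses depth 1, the depth passed into that call, exactly matching the second probe.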
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

bob wrote:
Henk wrote:
Sven Schüle wrote:
bob wrote:
Henk wrote:Looks like it plays even worse when fixing the bug, maybe because other stuff is tuned to work with this bug. But I want to work with code with least number of bugs.

I should have never trusted that code. But if you don't see what is wrong all the time. I think names of my transposition table helper methods were misleading too.
Get the terminology fixed in YOUR head first. Because it can be confusing and has confused many over the years.

[...]

If you get a LOWER entry, and the value is <= current alpha value, you can pretend you have searched everything and return alpha. If the table value is > current alpha, you can't do anything, because you have a window of opportunity to be wrong. We previously proved that the score was < table value (that's why we stored it) but if the actual alpha bound is less than that, then a score here could be less than value, but > alpha, which means we need to keep searching.

If you get an UPPER entry, and the value is >= current beta, you can pretend you did a search here and it failed high, and just return beta. But if the current tt value is < beta, you have a potential for the case where ttable-value < score < beta, so you really can't prove this should be a beta cutoff and you keep searching.
Didn't you mix up LOWER and UPPER again in those last two sections above? I think you return alpha if tt-value <= alpha and you have an UPPER_BOUND entry, and you return beta if tt-value >= beta and you have a LOWER_BOUND entry.
About updating entries in the transposition table:

For the same position, all upper bounds at the same search depth should have the same value, for it is only an upper bound if all moves have been searched through. So updating entries in that case makes no sense.

But lower bounds at the same depth can differ because of fail highs, which means that not all moves are searched through. So updating a lower bound in a transposition table entry can be useful if a new lower bound has been found that is greater than the old lower bound stored for the same position at the same depth.

At least my search algorithm only fails low if all moves have been searched through. But I'm not totally sure the values stored in my transposition table are proven right, for the applied reductions might differ because the alpha and beta search parameters were different when the entry was stored. Or should the applied reductions not depend on alpha and beta?
I don't follow. With negamax, what is a lower bound at one ply is an upper bound at the previous or next ply. If you search all moves at any ply with no improvement to alpha, you return alpha, obviously. So I assume we are the same there.

Ignore reductions and extensions. Remember, you probe at the TOP of search, before you have done any modifications to depth at all. So you should store at the end of search using the same depth that was passed in, so that you could re-use this ttable entry here if you came right back for some reason. And the depth would be identical for the ttable entry and the current depth that was passed in.
If the search is started with a bigger alpha, the opponent may need fewer moves to refute the position than with a smaller alpha. So an upper bound needs to be updated in the TT too, for a search with a smaller alpha may return a smaller value.

[[This means that for a repeated search of the same position with a bigger alpha, no update of the upper bound in the TT is needed for that position? I don't know at this moment whether I can use that information.]]
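One way to act on this reasoning is an update policy that only tightens bounds for the same position at the same depth: keep the smaller of two upper bounds and the larger of two lower bounds. A small sketch, with made-up flag names and a hypothetical `tighten` helper:

```python
# Same-position, same-depth update policy: bounds may be tightened.
# An entry is a (flag, value) pair; names are illustrative only.
EXACT, LOWER, UPPER = 0, 1, 2

def tighten(old, new):
    """Given the stored entry and a freshly computed one for the same
    position and depth, return the entry worth keeping."""
    if old is None or new[0] == EXACT:
        return new                               # exact score wins
    if old[0] == new[0] == UPPER:
        return new if new[1] < old[1] else old   # smaller upper bound
    if old[0] == new[0] == LOWER:
        return new if new[1] > old[1] else old   # larger lower bound
    return new                                   # else prefer fresher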
op12no2
Posts: 489
Joined: Tue Feb 04, 2014 12:25 pm
Full name: Colin Jenkins

Re: Prehistoric major bugs in Transposition table

Post by op12no2 »

Henk wrote:I found out today that the updating of the transposition table in my chess program was totally wrong. I called methods like entry.UpdateLowerbound when the new value was not a lower bound. I thought that "lower bound" meant a lower bound on the old (stored) value, but it meant that the method may only be called when the new value is a lower bound.
I found a similar bug recently. I handle BETA tt entries in the move loop (returning immediately), then after the moves I check the alpha that has been updated against the original alpha to decide between EXACT and ALPHA tt entries. But before that I was updating alpha with mate/stalemate scores, which meant some EXACTs were in fact BETAs (CONTEMPT >= beta). I changed the mate/stalemate code to simply return the scores immediately, figuring it was a waste of a tt entry anyway because it's quick to get to (and there is no useful move to store), and hey presto, my program lost Elo :)
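The flag decision op12no2 describes hinges on comparing the final score against the ORIGINAL alpha, which is exactly what goes wrong if something raises alpha before the normal move scoring. A minimal sketch of the decision, with illustrative names:

```python
# Decide the entry type after the move loop. alpha_orig is the alpha
# the node was called with, NOT the updated one; score is the value
# about to be returned. Flag names are made up for the sketch.
EXACT, ALPHA_FLAG, BETA_FLAG = 0, 1, 2

def flag_for(score, alpha_orig, beta):
    if score >= beta:
        return BETA_FLAG     # failed high: score is a lower bound
    if score <= alpha_orig:
        return ALPHA_FLAG    # failed low: score is an upper bound
    return EXACT             # genuinely inside the original window
```

If a mate/stalemate (or contempt) score has already bumped alpha to or above beta, comparing against the updated alpha instead of `alpha_orig` can let a score that should be labelled BETA slip through as EXACT, which is the mislabelling described above.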

I also simply cannot get null-move BETA tt entries to improve things either.