Prehistoric major bugs in Transposition table

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Prehistoric major bugs in Transposition table

Post by Henk »

I found out today that the updating of the transposition table in my chess program was totally wrong. I called methods like entry.UpdateLowerbound when the new value was not a lower bound. I thought that 'lower bound' meant a lower bound on the old (stored) value, but it means that the method may only be called when the new value is a lower bound.

These bugs have been in my program for, say, at least a year. So I can start all over again, for many experiments are invalidated. Back to the past.
ZirconiumX
Posts: 1334
Joined: Sun Jul 17, 2011 11:14 am

Re: Prehistoric major bugs in Transposition table

Post by ZirconiumX »

Henk wrote:I found out today that the updating of the transposition table in my chess program was totally wrong. I called methods like entry.UpdateLowerbound when the new value was not a lower bound. I thought that 'lower bound' meant a lower bound on the old (stored) value, but it means that the method may only be called when the new value is a lower bound.

These bugs have been in my program for, say, at least a year. So I can start all over again, for many experiments are invalidated. Back to the past.
Does fixing the bug make the program better or worse? If it makes it better, keep it. If it makes it worse, scrap the TT and start again.

Matthew:out
Some believe in the almighty dollar.

I believe in the almighty printf statement.
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

It looks like it plays even worse with the bug fixed, maybe because other stuff is tuned to work with the bug. But I want to work with code that has the fewest bugs.

I should never have trusted that code. But you don't always see what is wrong. I think the names of my transposition table helper methods were misleading too.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Prehistoric major bugs in Transposition table

Post by bob »

Henk wrote:It looks like it plays even worse with the bug fixed, maybe because other stuff is tuned to work with the bug. But I want to work with code that has the fewest bugs.

I should never have trusted that code. But you don't always see what is wrong. I think the names of my transposition table helper methods were misleading too.
Get the terminology fixed in YOUR head first. Because it can be confusing and has confused many over the years.

You do a search and you want to store the result of that search so that if you see this position again, you can use the table entry and search nothing. There are three cases.

You complete the search at this ply, and you find that the actual value is > alpha AND is < beta. This is stored as an EXACT entry, along with the depth (called draft as it is stored in the table).

You complete the search at this ply, and you find that the actual value is <= alpha, which we call a fail low; this happens after all moves are searched and none could produce a score > alpha. You store the alpha bound, but it is flagged as UPPER, because although alpha is the lower bound of the current window, the actual score has been proven to be <= that bound, though we don't know by how much. So we know alpha is an UPPER bound, and the actual score could even be less.

You complete the search at this ply, and you find that the actual value is >= beta, which we call a fail high; this happens when just one move produces a score >= beta. We store beta with the flag LOWER, because this is a lower bound on the score: all we proved was that the score is >= beta, but not by how much.
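
In code, those three store cases might look something like this. This is a minimal C sketch only; the entry layout and the names are illustrative assumptions, not any particular engine's API:

Code:

typedef enum { TT_EXACT, TT_LOWER, TT_UPPER } Bound;

typedef struct {
    unsigned long long key;   /* zobrist signature of the position  */
    int   value;              /* score or bound that was proved     */
    int   draft;              /* depth of the search that proved it */
    Bound flag;               /* EXACT, LOWER or UPPER              */
} TTEntry;

/* Store the result of a search over the window (alpha, beta). */
void tt_store(TTEntry *e, unsigned long long key,
              int value, int alpha, int beta, int depth)
{
    e->key   = key;
    e->draft = depth;
    if (value <= alpha) {         /* fail low: true score <= alpha  */
        e->value = alpha;
        e->flag  = TT_UPPER;      /* alpha is an upper bound        */
    } else if (value >= beta) {   /* fail high: true score >= beta  */
        e->value = beta;
        e->flag  = TT_LOWER;      /* beta is a lower bound          */
    } else {                      /* alpha < value < beta: exact    */
        e->value = value;
        e->flag  = TT_EXACT;
    }
}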

Those seem "backward" when you first think about it. Then comes the other side of the coin when you reach a new position and probe the tt and get a hit.

First, you have to make sure that the draft (depth stored in tt) is >= current depth, so that the table represents a search deep enough to actually trust.

If you get an EXACT entry, you simply return the value and you are done, no more searching.

If you get a LOWER entry, and the value is <= the current alpha value, you can pretend you have searched everything and return alpha. If the table value is > the current alpha, you can't do anything, because you have a window of opportunity to be wrong. We previously proved that the score was <= the table value (that's why we stored it), but if the actual alpha bound is less than that, then the score here could be less than the table value but > alpha, which means we need to keep searching.

If you get an UPPER entry, and the value is >= the current beta, you can pretend you did a search here and it failed high, and just return beta. But if the tt value is < the current beta, you have the potential for the case where ttable-value < score < beta, so you really can't prove this should be a beta cutoff, and you keep searching.

The LOWER and UPPER flags make more sense on the probe than on the store. But if you use that terminology, you will be able to discuss hashing with others without introducing even more confusion into your thinking...
JVMerlino
Posts: 1357
Joined: Wed Mar 08, 2006 10:15 pm
Location: San Francisco, California

Re: Prehistoric major bugs in Transposition table

Post by JVMerlino »

bob wrote:
[...]

If you get a LOWER entry, and the value is <= the current alpha value, you can pretend you have searched everything and return alpha. If the table value is > the current alpha, you can't do anything, because you have a window of opportunity to be wrong. We previously proved that the score was <= the table value (that's why we stored it), but if the actual alpha bound is less than that, then the score here could be less than the table value but > alpha, which means we need to keep searching.

If you get an UPPER entry, and the value is >= the current beta, you can pretend you did a search here and it failed high, and just return beta. But if the tt value is < the current beta, you have the potential for the case where ttable-value < score < beta, so you really can't prove this should be a beta cutoff, and you keep searching.

The LOWER and UPPER flags make more sense on the probe than on the store. But if you use that terminology, you will be able to discuss hashing with others without introducing even more confusion into your thinking...
Just to elaborate on one point: In those cases where you get a value from the ttable but are not able to return from the search (due to either of the cases that Bob described), you should at least put the associated move that you got from the ttable to the front of the list of moves to search this time around. If it was best at a shallower search, it will likely still be best at the current depth, hopefully saving you some time.
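
As a rough sketch of that, assuming a flat move list and an integer Move type purely for illustration:

Code:

typedef int Move;

/* Move the hash move, if present, to the front of the list,
   preserving the relative order of the remaining moves. */
void put_tt_move_first(Move *moves, int n, Move tt_move)
{
    for (int i = 0; i < n; i++) {
        if (moves[i] == tt_move) {
            for (int j = i; j > 0; j--)   /* shift earlier moves right */
                moves[j] = moves[j - 1];
            moves[0] = tt_move;
            return;
        }
    }
}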

jm
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Prehistoric major bugs in Transposition table

Post by Sven »

bob wrote:
[...]
Get the terminology fixed in YOUR head first. Because it can be confusing and has confused many over the years.

[...]

If you get a LOWER entry, and the value is <= the current alpha value, you can pretend you have searched everything and return alpha. If the table value is > the current alpha, you can't do anything, because you have a window of opportunity to be wrong. We previously proved that the score was <= the table value (that's why we stored it), but if the actual alpha bound is less than that, then the score here could be less than the table value but > alpha, which means we need to keep searching.

If you get an UPPER entry, and the value is >= the current beta, you can pretend you did a search here and it failed high, and just return beta. But if the tt value is < the current beta, you have the potential for the case where ttable-value < score < beta, so you really can't prove this should be a beta cutoff, and you keep searching.
Didn't you mix up LOWER and UPPER again in those last two sections above? I think you return alpha if tt-value <= alpha and you have an UPPER_BOUND entry, and you return beta if tt-value >= beta and you have a LOWER_BOUND entry.
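
With the flags that way around, a probe sketch matching the description (reusing the illustrative TTEntry layout from the store sketch earlier in the thread; a real probe would also hand back the stored move for ordering, as JVMerlino notes above):

Code:

/* Returns 1 and sets *score when the entry decides this node. */
int tt_probe(const TTEntry *e, unsigned long long key,
             int alpha, int beta, int depth, int *score)
{
    if (e->key != key)
        return 0;                 /* different position: no hit     */
    if (e->draft < depth)
        return 0;                 /* stored search was too shallow  */
    switch (e->flag) {
    case TT_EXACT:                /* true score is known exactly    */
        *score = e->value;
        return 1;
    case TT_UPPER:                /* true score <= stored value     */
        if (e->value <= alpha) {
            *score = alpha;       /* cannot exceed alpha: fail low  */
            return 1;
        }
        break;
    case TT_LOWER:                /* true score >= stored value     */
        if (e->value >= beta) {
            *score = beta;        /* must reach beta: fail high     */
            return 1;
        }
        break;
    }
    return 0;                     /* bound does not decide: search  */
}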
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

Sven Schüle wrote:
[...]
Didn't you mix up LOWER and UPPER again in those last two sections above? I think you return alpha if tt-value <= alpha and you have an UPPER_BOUND entry, and you return beta if tt-value >= beta and you have a LOWER_BOUND entry.
About updating entries in the transposition table:

For the same position, all upper bounds at the same search depth should have the same value, for it is only an upper bound if all moves have been searched through. So updating entries in that case makes no sense.

But lower bounds at the same depth can differ because of fail highs, which mean that not all moves are searched through. So updating a lower bound in a transposition table entry can be useful if a new lower bound has been found that is greater than the old lower bound stored for the same position at the same depth.

At least my search algorithm only fails low if all moves have been searched through. But I'm not totally sure that the values stored in my transposition table are proven right, for the reductions might differ because the alpha and beta search parameters were different when the entry was stored. Or should the applied reductions not depend on alpha and beta?
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

Should the applied reductions/extensions never depend on alpha and beta, because otherwise TT entries cannot be trusted? That would mean null-move reduction fails, for I apply the null move when the position value is greater than beta.
Daniel Anulliero
Posts: 759
Joined: Fri Jan 04, 2013 4:55 pm
Location: Nice

Re: Prehistoric major bugs in Transposition table

Post by Daniel Anulliero »

Hi!

I understand what you mean; hash tables have always been my nightmare! :-)
My first engines (DCP and Jars) never had hash tables, and Yoda, my third engine, has them but plays much worse...
It seems the hash tables are working not so badly in my new engine ISA; we'll see...
Do you plan to release Skipper one day?
All the best
Dany
Isa download :
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Prehistoric major bugs in Transposition table

Post by Henk »

Henk wrote:About updating entries in the transposition table:

For the same position, all upper bounds at the same search depth should have the same value, for it is only an upper bound if all moves have been searched through. So updating entries in that case makes no sense.

But lower bounds at the same depth can differ because of fail highs, which mean that not all moves are searched through. So updating a lower bound in a transposition table entry can be useful if a new lower bound has been found that is greater than the old lower bound stored for the same position at the same depth.

At least my search algorithm only fails low if all moves have been searched through. But I'm not totally sure that the values stored in my transposition table are proven right, for the reductions might differ because the alpha and beta search parameters were different when the entry was stored. Or should the applied reductions not depend on alpha and beta?
I don't think my last comments make sense. Upper bounds in TT need to be updated too.
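
In that spirit, an update rule that keeps whichever bound is tighter could look like this. This is a sketch only, reusing the illustrative TTEntry layout from earlier in the thread:

Code:

/* Update an existing entry for the same position. A new lower bound is
   useful if it is higher than the stored one; likewise a new upper
   bound is useful if it is lower. A deeper result simply overwrites. */
void tt_update(TTEntry *e, int value, Bound flag, int depth)
{
    if (depth > e->draft) {       /* deeper proof: overwrite        */
        e->value = value;
        e->flag  = flag;
        e->draft = depth;
        return;
    }
    if (depth < e->draft)
        return;                   /* shallower proof adds nothing   */
    if (flag == TT_LOWER && e->flag == TT_LOWER && value > e->value)
        e->value = value;         /* tighter lower bound            */
    if (flag == TT_UPPER && e->flag == TT_UPPER && value < e->value)
        e->value = value;         /* tighter upper bound            */
}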