
Transposition Table - Updating entries

Posted: Sat Jun 25, 2016 12:05 am
by AndrewGrant
I was messing around with fine#70 and found that my table replacement scheme made finding the correct move nearly impossible. I use the standard scheme: replace the oldest low-draft entry first, otherwise always replace the lowest-draft entry. But I had a special condition for storing a position which was already stored.

Something like this

Code:

if (newEntry.hash == currentEntry.hash
  && newEntry.depth >= currentEntry.depth)
  // Do the replacement 
When I removed all special cases for storing entries that were already in the table, I was able to complete fine#70 instantly.

However, it seems wasteful to always throw out the old repeat. Maybe a lower-draft newEntry should only be inserted if it has a different bestMove, or if it has stricter bounds.
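To make the two policies concrete, here is a minimal sketch of a depth-guarded store versus an always-replace store. The `TTEntry` layout and function names are illustrative only, not from any particular engine:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical TT entry layout -- field names are illustrative only. */
typedef struct {
    uint64_t hash;
    int depth;      /* draft of the search that produced the entry */
    int bestMove;
    int score;
} TTEntry;

/* The original special-case store: a same-hash entry is only
   overwritten when the new search was at least as deep. */
static void store_depth_guarded(TTEntry *slot, const TTEntry *fresh) {
    if (slot->hash == fresh->hash && fresh->depth < slot->depth)
        return;               /* keep the deeper old entry */
    *slot = *fresh;
}

/* Always replace on a hash match: the entry that just failed to cut
   this search is discarded unconditionally. */
static void store_always(TTEntry *slot, const TTEntry *fresh) {
    *slot = *fresh;
}
```

With `store_depth_guarded`, a shallower re-search of an already-stored position is silently dropped; with `store_always`, the fresh result (and its possibly different bestMove and bounds) wins.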

Do you guys have special cases for storing repeated entries?

Re: Transposition Table - Updating entries

Posted: Sat Jun 25, 2016 3:10 am
by bob
AndrewGrant wrote:I was messing around with fine#70 and found that my table replacement scheme made finding the correct move nearly impossible. I use the standard scheme: replace the oldest low-draft entry first, otherwise always replace the lowest-draft entry. But I had a special condition for storing a position which was already stored.

Something like this

Code:

if (newEntry.hash == currentEntry.hash
  && newEntry.depth >= currentEntry.depth)
  // Do the replacement 
When I removed all special cases for storing entries that were already in the table, I was able to complete fine#70 instantly.

However, it seems wasteful to always throw out the old repeat. Maybe a lower-draft newEntry should only be inserted if it has a different bestMove, or if it has stricter bounds.

Do you guys have special cases for storing repeated entries?
Think about this:

First thing you do is probe. If the entry you find does NOT terminate the search, why would you hang on to it rather than replace it with the result of THIS search, which you were forced to do due to the lack of useful information in the old entry? You can't allow multiple entries with the same signature but different drafts or whatever: they could hold conflicting information and you would have to break the tie. So break the tie and store/keep whatever seems better. In this case, the result of a search you complete ALWAYS has to be stored; the only choice is what you will overwrite.
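The "always store, only choose the victim" idea above can be sketched for a bucketed table. The bucket size, `age` field, and function name are assumptions for illustration, not from Crafty or any specific engine:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t hash;
    int depth;   /* draft of the stored search */
    int age;     /* search generation the entry was stored in */
} TTEntry;

enum { BUCKET_SIZE = 4 };

/* Pick the slot to overwrite after a completed search: a slot with the
   same signature is always the victim; otherwise prefer entries from
   older searches, breaking ties by lowest draft. */
static TTEntry *choose_victim(TTEntry bucket[BUCKET_SIZE],
                              uint64_t hash, int current_age) {
    TTEntry *victim = &bucket[0];
    for (int i = 0; i < BUCKET_SIZE; i++) {
        if (bucket[i].hash == hash)
            return &bucket[i];        /* same position: always replace */
        int v_stale = victim->age != current_age;
        int c_stale = bucket[i].age != current_age;
        if (c_stale > v_stale ||
            (c_stale == v_stale && bucket[i].depth < victim->depth))
            victim = &bucket[i];
    }
    return victim;
}
```

Note that the search result is always written somewhere; the replacement policy only decides which existing entry gives way.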

Re: Transposition Table - Updating entries

Posted: Sun Jun 26, 2016 2:19 pm
by ymatioun
I had the same TT replacement logic, and tried changing it to "always replace"; now searching the fine #70 position to depth 40 takes only 78 msec, where it used to take 17 sec: a speed-up of over 200x!
I'll test this change further before accepting it (and I don't expect a large Elo gain), but it is truly amazing that something so simple can have such a large impact, even if it is limited to a few positions.
PS Bob's explanation makes sense; it is the magnitude of the effect that surprised me.

Re: Transposition Table - Updating entries

Posted: Sun Jun 26, 2016 3:00 pm
by hgm
AndrewGrant wrote:However, it seems wasteful to always throw out the old repeat. Maybe a lower-draft newEntry should only be inserted if it has a different bestMove, or if it has stricter bounds.

Do you guys have special cases for storing repeated entries?
Well, Bob already said it: the old entry is completely useless, because it could not prevent you from redoing the search on the position it was for, despite its high depth. Keeping it just wastes an entry in the TT which cannot be used to store something potentially useful. And that is especially bad if the useless entry has a high depth, because other entries will then probably refrain from overwriting it: at the point where those need to be stored, the engine does not yet know the high-depth entry is useless. It only knows that immediately after the re-search of the same node.

So it would be detrimental to keep it. But of course not nearly as detrimental as letting it prevent useful info about the same node (albeit with lower depth) from being stored at all, as you were doing initially.
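The blocking effect described above can be shown in a few lines. This is a sketch under the assumption of a simple depth-preferred store (illustrative names, not any engine's actual code): a dead deep entry that survives the re-search keeps rejecting stores of genuinely useful shallower results for other positions mapped to the same slot.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t hash;
    int depth;
} TTEntry;

/* Depth-preferred store for a slot shared by different positions.
   Returns 1 if the store succeeded, 0 if the old entry was kept. */
static int store_depth_preferred(TTEntry *slot, const TTEntry *fresh) {
    if (fresh->depth >= slot->depth) {
        *slot = *fresh;
        return 1;
    }
    return 0;
}
```

A depth-20 entry that has already failed to terminate its own re-search still wins this comparison against a fresh depth-6 result for another node, which is exactly the waste being described.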