Using that logic, on 64-bit machines going to 128-bit signatures will not hurt performance, so go to 128...

Michael Sherwin wrote:
I tried it with RomiChess. Performance did not suffer, so I see no reason not to use 96 bits.

mathmoi wrote:
Hi,
We all know that we need to check the validity of a TT move before we execute it on the board, because otherwise we could end up corrupting the engine's internal board state and probably crash the engine.
What is the probability of this occurring? Let's say with an 8 GB TT and a search speed of 15 million nps (both values higher than what I expect to achieve with my current hardware).
8*1024^3 / 16 = 536870912 = 2^29 entries in the TT (assuming 16-byte entries).
I need the 29 lower bits of the hash to find the index in the TT.
So the remaining 35 bits will be the ones that detect a hash collision. That means I'll get an undetected collision roughly every 2^35 probes.
2^35 / 15000000 ≈ 2290.6 seconds between collisions.
That means that if I don't check the validity of the move from the TT, I'll probably crash about every 38 minutes.
However, if I use a 96-bit hash (32 + 64 bits), I can use the 29 lower bits of the 32-bit part to get an index and store the full 64-bit part in the TT to detect collisions. Now I'll get an undetected collision only every 2^64 probes, or:
2^64 / 15000000 ≈ 1229782938247 seconds between collisions, or about 40000 years.
Now my question to you: can it be a good idea to use a 96-bit hash and completely skip the validity check of TT moves (since we don't mind crashing every 40000 years)?
MP
There is a computational cost to generating and comparing the extra signature bits, and an optimized program will show it. And storing an extra 32 or 64 bits of signature means that each hash entry will be 32 or 64 bits larger, reducing the total number of entries for a given hash-table size.