bob wrote:I recently played with a 64-bit hash entry, which left 24 bits for the signature. Since I was using 8 GB of hash, that turned into 32 + 24 bits, and it led me to discover a very tricky condition my legal move check failed to recognize, because there were enough collisions that an oddball case showed up (it had to do with check: moving a piece was legal in one position but not in the one that collided). It took a long time to happen, running on 20 cores, until I began to get suspicious about the condition; then I found a position that would crash within 10 minutes or so and tracked it down. My legality check doesn't deal with the case of moving a piece that is pinned on the king, since that can't happen, except on collisions... Have not fixed it, but gave up on 8-byte hash entries until I decide what to do. I have a quick "PinnedOnKing()" function for endgames, but did not want to make the legal move test any slower since it is a rare problem (no crashes in over 10M games, in fact, until the shorter hash signature became a problem).
Hmmm... A 64-bit entry is 8 bytes, which means there were just 1B entries in the table, so with 24 bits of stored signature it's 30 + 24 = 54 bits, not 56. I'm kind of curious, since I've played "what if" with 64-bit TT entries myself: what exactly is the format of each entry you used?
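Just to spell out that arithmetic, here's a tiny C sanity check (the variable names are mine, not Crafty's): with 8-byte entries in an 8 GB table there are 2^30 slots, so 30 index bits come "for free" on top of whatever signature bits the entry itself stores.

#include <stdio.h>

int main(void) {
    unsigned long long table_bytes = 8ULL << 30;  /* 8 GB of hash          */
    unsigned long long entry_bytes = 8;           /* one 64-bit entry      */
    unsigned long long entries = table_bytes / entry_bytes;  /* 2^30 slots */

    int index_bits = 0;
    while ((1ULL << index_bits) < entries)
        index_bits++;                             /* lands on 30           */

    int stored_sig_bits = 24;                     /* bits kept in the entry */
    printf("effective signature bits = %d + %d = %d\n",
           index_bits, stored_sig_bits, index_bits + stored_sig_bits);
    /* prints: effective signature bits = 30 + 24 = 54 */
    return 0;
}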
Seems like a simple solution would be to add 8 more bits of stored signature to make it 30 + 32 = 62 bits, with seven nine-byte entries per bucket, unless Crafty has problems with a 7-by-9 configuration. Or, if you're really pinching bits, you could use a 73-bit entry and still get 7 entries per bucket, with 1 extra bit of signature stored for a total of 63 bits.
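For what it's worth, the 7-by-9 geometry fills a cache line exactly: seven 9-byte entries are 63 bytes, leaving one spare byte in a 64-byte line. A minimal sketch, with a made-up internal field split (nothing here is Crafty's actual layout):

#include <stdint.h>

typedef struct {
    uint8_t bytes[9];   /* 72 bits: e.g. 32-bit signature + 40 bits of
                           move/score/depth/bound, packed by hand       */
} tt_entry9;

typedef struct {
    tt_entry9 entry[7]; /* 7 * 9 = 63 bytes                             */
    uint8_t   spare;    /* pad out to a full 64-byte cache line         */
} tt_bucket;

_Static_assert(sizeof(tt_bucket) == 64, "bucket must fit a cache line");

The 73-bit variant gives up the clean byte boundaries (7 * 73 = 511 bits, still under 512) in exchange for that one extra signature bit, at the cost of messier shift-and-mask extraction.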
If you really want to pinch every last bit, you could use a 9-bit move and a 14-bit score (down to the centipawn level as a short integer). You still might get down to a 64-bit entry, depending on what else you are storing.
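One hypothetical way that bit budget could land in a single 64-bit entry; none of these widths are from Crafty, they just follow the 9-bit move / 14-bit score suggestion (24 signature + 9 move + 14 score + 7 depth + 2 bound + 8 age = 64 bits exactly):

#include <stdint.h>

enum { BOUND_EXACT, BOUND_LOWER, BOUND_UPPER };

typedef uint64_t tt_entry;

static inline tt_entry tt_pack(uint32_t sig24, uint32_t move9,
                               int32_t score14, uint32_t depth7,
                               uint32_t bound2, uint32_t age8) {
    return  ((tt_entry)(sig24  & 0xFFFFFF)       )   /* bits  0..23 */
          | ((tt_entry)(move9  & 0x1FF)     << 24)   /* bits 24..32 */
          | ((tt_entry)((uint32_t)score14 & 0x3FFF) << 33) /* 33..46 */
          | ((tt_entry)(depth7 & 0x7F)      << 47)   /* bits 47..53 */
          | ((tt_entry)(bound2 & 0x3)       << 54)   /* bits 54..55 */
          | ((tt_entry)(age8   & 0xFF)      << 56);  /* bits 56..63 */
}

static inline int32_t tt_score(tt_entry e) {
    uint32_t raw = (uint32_t)((e >> 33) & 0x3FFF);
    /* sign-extend the 14-bit two's-complement score */
    return (int32_t)raw - ((raw & 0x2000) ? 0x4000 : 0);
}

A 9-bit move presumably means an index into the position's generated move list rather than a from/to encoding, which is what makes it that small.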
bob wrote:The gain for doubling is not a constant. It certainly drops off once the table holds everything needed, assuming a decent replacement policy, or in the trivial case of going from 2 to 4 entries, which will give nothing measurable at all today. A key point is where you are at half the optimal size: doubling there can give a significant Elo gain. You can probably find some old threads on r.g.c.c dealing with this. It follows a sort of normal distribution: you gain little until you get into the key zone for number of entries, then Elo jumps quite a bit, then it starts to improve less with additional size. I found cases where too small a ttable could double the search time.
You can probably turn up interesting information by matching the lower 64 bits against the whole signature and reporting when they don't agree... I did a similar test back in the '90s, but I stored the actual position as 32 bytes, to see how often a collision occurred. With 64 bits back then, it was extremely rare, maybe once every day of searching (24 hours) or so. Not today, however.
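A rough sketch of that verification idea, assuming a debug-only shadow table (all names here are mine): keep the full 64-bit key alongside the normal probe and count the cases where the short stored signature matches but the full key does not. Empty-slot handling and a real replacement policy are omitted for brevity.

#include <stdint.h>
#include <stdio.h>

#define TT_BITS 20                       /* small table for the test    */
#define TT_SIZE (1u << TT_BITS)

static uint32_t short_sig[TT_SIZE];      /* 24-bit stored signature     */
static uint64_t full_key[TT_SIZE];       /* debug shadow of full key    */
static uint64_t collisions;

void tt_probe_check(uint64_t key) {
    uint32_t idx = (uint32_t)(key & (TT_SIZE - 1));
    uint32_t sig = (uint32_t)(key >> 40);        /* top 24 bits         */

    if (short_sig[idx] == sig && full_key[idx] != key) {
        collisions++;                    /* short match, real mismatch  */
        fprintf(stderr, "collision #%llu at index %u\n",
                (unsigned long long)collisions, idx);
    }
    short_sig[idx] = sig;                /* always-replace, for brevity */
    full_key[idx]  = key;
}

Run that over a long search and the collision counter gives you a direct empirical rate for the short-signature scheme instead of an estimate.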
bob wrote:Note that on common hardware I can hit 14 billion nodes in reasonable games. I.e., 1B nodes in 10 seconds, 10B in 100 seconds. And I have run on better hardware where speeds were beyond 150M nodes per second... And this in real chess, not perft, which runs quicker.
I'm curious about the "better" hardware, mostly for scaling reasons. The biggest and baddest Intel-based system I can think of (single machine only, not a cluster) is 8 x E7-8890 v2 (15 cores each) running at 3.4 GHz. I'm wondering how well Crafty would scale on such a machine, or any other chess engine for that matter.