I'm not sure about the best way to deal with empty entries in the TT.
I currently have them as null objects, but I think someone said a while ago to just store them with depth=0 so I don't need to check for null.
I don't think this is right; when we probe the TT, we have...
And it's possible to store a depth of 0 when you record a hash entry.
So, if you ignore QS (all depths >= 0), you could initialize all TT entries with depth -1, and I think that would work, since no entries would be null and entries with depth == -1 would always fail the condition entry.depth >= depth.
I use the same data structure as a lot of people (I think): two 64-bit integers, one for the hash key, the other for the data. I just set them both to zero.
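A minimal sketch of that layout, assuming illustrative names (TTEntry, tt_clear, and the table size are not from the thread):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical TT entry: two 64-bit words, as described above. */
typedef struct {
    uint64_t key;   /* full hash key (signature) */
    uint64_t data;  /* packed depth, flag, move, score */
} TTEntry;

#define TT_SIZE (1u << 20)   /* illustrative size; must be a power of two */
static TTEntry tt[TT_SIZE];

/* Zeroing both words marks every entry "empty": a probe for any
   real position's key will simply fail the signature comparison. */
static void tt_clear(void) {
    memset(tt, 0, sizeof tt);
}
```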
Thanks, but I use one for the key, and then the other holds depth, flag, best_move, value,
So in this instance, would setting depth to -1 for an empty entry be a good idea, to avoid checking for null entries, since the entry will be discarded by the test if (entry.depth >= depth) in the PROBE_HASH function?
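The scheme being asked about can be sketched like this; the struct and function names are illustrative, not from any particular engine:

```c
#include <stdint.h>

#define EMPTY_DEPTH (-1)   /* sentinel: below every legal draft >= 0 */

/* Hypothetical unpacked entry; field names are illustrative. */
typedef struct {
    uint64_t key;
    int depth, flag, value, best_move;
} HashEntry;

/* Returns nonzero if the entry is usable for this probe.
   An empty entry (depth == -1) fails depth >= draft for any
   draft >= 0, so no separate null check is needed. */
static int probe_ok(const HashEntry *e, uint64_t key, int draft) {
    return e->key == key && e->depth >= draft;
}
```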
Colin wrote: Thanks, but I use one for the key, and then the other holds depth, flag, best_move, value,
So in this instance, would setting depth to -1 for an empty entry be a good idea, to avoid checking for null entries, since the entry will be discarded by the test if (entry.depth >= depth) in the PROBE_HASH function?
The best thing for you to do is just test it both ways and see if your way is faster. I suspect not, since most processors and cache controllers grab a full chunk of memory on a read, though the size might depend on the processor. So even if you are only trying to look at a single byte, it might grab a full 64-bit word anyway, and no time would be saved. I would just write a loop that reads a whole bunch of different hash locations (do not read one location over and over, since it will stay in the cache and show fast access, while a real hash table is much larger than the cache and is accessed more slowly).
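A rough sketch of such a benchmark loop, under stated assumptions (the table size, the stride constant, and the function names are all illustrative): it walks many different slots with a large odd stride so successive reads are unlikely to hit the cache, rather than hammering one slot.

```c
#include <stdint.h>
#include <time.h>

#define N (1u << 22)            /* 32 MB of uint64_t: much larger than cache */
static uint64_t table[N];

/* Visit N different slots in pseudo-random order; reading one
   slot repeatedly would only measure the L1 cache, as noted above. */
static uint64_t sweep(void) {
    uint64_t sum = 0;
    uint32_t i = 0;
    for (uint32_t n = 0; n < N; n++) {
        sum += table[i];
        i = (i + 2654435761u) & (N - 1);  /* odd stride: cycles all slots */
    }
    return sum;
}

/* Wall-clock the sweep with the standard clock() facility. */
static double time_sweep(void) {
    clock_t t0 = clock();
    volatile uint64_t s = sweep();
    (void)s;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```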
There is no need for any of this at all. A TT probe for a d=0 result will be satisfied by any depth in the table, so what would you gain by excluding empty entries? After a few seconds, 99.999% of your table will be filled anyway.
Probing an empty entry will be counted as a miss, since its signature will not match the hash key the prober is looking for, unless that hash key is accidentally zero; but the probability of that is no larger than the probability of any other key collision on a probe that hits a filled entry.
Draft 0 will always be replaced, so the empty entries will disappear from the table quite quickly.
I hope to put QS back into the engine once I have the TT working smoothly. In that case I will have negative depths, so perhaps I should just set all the empty-entry depths to -100 to keep things simple?
Better to always set the depth in QS to 0. (I.e., if your Search() routine is called with a negative depth argument, just make it zero at the beginning of the routine.)
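That clamping amounts to a one-liner at the top of the search routine; a minimal sketch, with the function name being illustrative:

```c
/* Clamp the draft at the top of Search(), as suggested above:
   QS nodes are then recorded with depth 0, so all stored
   drafts stay >= 0 and the empty-entry sentinel can stay -1. */
static int clamp_depth(int depth) {
    return depth < 0 ? 0 : depth;
}
```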
If I use QS, then since QS depths are negative, would it be safer to set the depth in an empty entry to something like -100 instead of 0, so that the test entry.depth >= depth (in the PROBE_HASH function/method)...
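For completeness, the -100 variant is a sketch like the following, assuming illustrative names; the point is only that the sentinel must sit below every draft the engine will ever probe with, including negative QS drafts:

```c
/* Sentinel well below any quiescence draft the engine will use,
   so entry.depth >= depth fails for empty entries even when the
   probing depth is negative. The value -100 is arbitrary. */
#define EMPTY_DEPTH (-100)

static int depth_ok(int stored_depth, int probe_depth) {
    return stored_depth >= probe_depth;
}
```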