
Two hash functions for distributed transposition table

Posted: Wed Dec 17, 2014 12:43 am
by Daniel Shawul
I wanted to test a distributed transposition table in a message-passing environment. The idea is to share only the upper part of the tree, say depth >= 10. I didn't want to have separate tables for the shared and non-shared parts. We take the hashkey modulo the number of processors to get the processor that holds the TT entry, and then modify the hashkey accordingly. So basically a different hash function is used for the shared part of the table.
For depth >= 10

Code:

processor_number = hashkey % n_processors;
hashkey = hashkey / n_processors;
Get(processor_number, hashkey);
For depth < 10, it is simply

Code:

Get(hashkey);
Do you see potential problems with using the same table for both? With a depth-preferred replacement scheme, the shared part of the TT should survive longer.
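
For concreteness, the combined probe looks roughly like the sketch below (SHARE_DEPTH and the probe_local/probe_remote helpers are just placeholders for the local table lookup and the message-passing request, not actual engine code):

Code:

#include <stdint.h>

#define SHARE_DEPTH 10  /* placeholder cutoff: this depth and above is shared */

/* placeholder helpers: probe_local searches the node's own table,
   probe_remote sends a lookup request to the owning rank over the message layer */
int probe_local(uint64_t key, int depth, int *score);
int probe_remote(int owner, uint64_t key, int depth, int *score);

int tt_probe(uint64_t hashkey, int depth, int n_processors, int *score)
{
    if (depth >= SHARE_DEPTH) {
        /* shared part: modulo picks the owning processor,
           the quotient becomes the key used on that processor */
        int owner = (int)(hashkey % n_processors);
        uint64_t key = hashkey / n_processors;
        return probe_remote(owner, key, depth, score);
    }
    /* non-shared part: plain local lookup with the unmodified hashkey */
    return probe_local(hashkey, depth, score);
}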

Re: Two hash functions for distributed transposition table

Posted: Wed Dec 17, 2014 5:13 pm
by hgm
Modulo is an expensive operation. It might be faster to do something like

processor_number = ((uint64_t)hashkey >> 48) * n_processors >> 16;

There is probably no need to modify the hashkey itself if you derive the processor number from bits that otherwise only determine the signature (and not the table index). It just means that some signatures will never occur in some of the local tables.
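
A small self-contained check of that mapping (the names here are only illustrative): the top 16 bits of the key are in 0..65535, so after the multiply and shift the result always lands in 0..n_processors-1, with no division needed:

Code:

#include <stdint.h>
#include <assert.h>

/* map a 64-bit key to a processor number without a division:
   (hashkey >> 48) is in [0, 65536), so multiplying by n_processors and
   shifting right by 16 yields a value in [0, n_processors) */
static int owner_of(uint64_t hashkey, unsigned n_processors)
{
    return (int)(((hashkey >> 48) * n_processors) >> 16);
}

int main(void)
{
    unsigned n_processors = 24;                     /* arbitrary cluster size */
    for (uint64_t h = 0; h < 1000000; h++) {
        uint64_t key = h * 0x9e3779b97f4a7c15ULL;   /* arbitrary key spread */
        int p = owner_of(key, n_processors);
        assert(p >= 0 && p < (int)n_processors);
    }
    return 0;
}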

Re: Two hash functions for distributed transposition table

Posted: Wed Dec 17, 2014 10:28 pm
by Daniel Shawul
That is a nice trick for replacing the modulo with a multiply and shift. Avoiding the modulo seems to speed up the distributed hash lookup on NUMA systems a bit. Using the same hashkey for both parts also doesn't seem to cause a problem, so thanks!