Two hash functions for distributed transposition table

Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Two hash functions for distributed transposition table

Post by Daniel Shawul »

I wanted to test a distributed transposition table in a message-passing environment. The idea is to share only the upper part of the tree, say depth >= 10. I didn't want separate tables for the shared and non-shared parts. We do a modulo to get the processor that holds the TT entry and then modify the hashkey accordingly. So basically a different hash function is used for the shared part of the table.
For depth >= 10

Code: Select all

processor_number = hashkey % n_processors;
hashkey = hashkey / n_processors;
Get(processor_number, hashkey);
For depth < 10, it is simply

Code: Select all

Get(hashkey);
Do you see potential problems with using the same table for both? With a depth-preferred replacement scheme, the shared part of the TT should survive longer.
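
A minimal sketch of the routing described above, with illustrative names (SPLIT_DEPTH, probe_local and probe_remote are assumptions, not code from an existing engine):

Code: Select all

#include <stdint.h>

#define SPLIT_DEPTH 10   /* entries at this depth or deeper are distributed */

typedef struct {
    uint64_t key;
    int depth, score, flags;
} TTEntry;

/* assumed helpers: probe this node's own table, or message another node */
TTEntry *probe_local(uint64_t hashkey);
TTEntry *probe_remote(int processor, uint64_t hashkey);

TTEntry *tt_probe(uint64_t hashkey, int depth, int n_processors, int my_id)
{
    if (depth >= SPLIT_DEPTH) {
        int owner = (int)(hashkey % n_processors);  /* processor that owns the entry */
        uint64_t key = hashkey / n_processors;      /* remaining bits give index and signature */
        return owner == my_id ? probe_local(key) : probe_remote(owner, key);
    }
    return probe_local(hashkey);                    /* shallow entries stay purely local */
}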
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Two hash functions for distributed transposition table

Post by hgm »

Modulo is an expensive operation. It might be faster to do something like

processor_number = ((uint64_t)hashkey >> 48) * n_processors >> 16;  /* top 16 bits scaled to [0, n_processors) */

There is probably no need to modify the hashkey itself if you derive the processor number from bits that otherwise only determine the signature (and not the table index). It just means that some signatures will never occur in some local tables.
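
A small sketch of that mapping, assuming a 64-bit hashkey (owner_of is just an illustrative name): the top 16 bits, scaled by n_processors, give a number in [0, n_processors) without any division.

Code: Select all

#include <stdint.h>
#include <assert.h>

/* map the key's top 16 bits to a processor id in [0, n_processors) */
static inline int owner_of(uint64_t hashkey, unsigned n_processors)
{
    return (int)(((hashkey >> 48) * n_processors) >> 16);
}

int main(void)
{
    /* sanity check: the result always stays in range for scrambled keys */
    for (uint64_t k = 0; k < 1000000; k++) {
        uint64_t key = k * 0x9E3779B97F4A7C15ULL;
        assert(owner_of(key, 7) < 7);
    }
    return 0;
}
Since only the top bits pick the processor while the lower bits still form the table index, the hashkey can be stored unchanged, as noted above.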
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Two hash functions for distributed transposition table

Post by Daniel Shawul »

That is a nice trick for doing the modulo with a multiply and shift. Avoiding the modulo seems to speed up the distributed hash lookup on NUMA systems a bit. Also, using the same hashkey doesn't seem to cause a problem, so thanks!