On-the fly hash key generation?
Moderators: hgm, Rebel, chrisw
-
- Posts: 27808
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
-
- Posts: 2559
- Joined: Fri Nov 26, 2010 2:00 pm
- Location: Czech Republic
- Full name: Martin Sedlak
Re: On-the fly hash key generation?
Aleks Peshkov wrote: n << sq | n >> (64-sq);

That's an awesome idea!
Rotations will be folded into just one instruction on x64, cutting the memory requirement down to a practically negligible amount (nearly by a factor of 64).
Does it work well in your engine compared to traditional full tables?
I can imagine it might work very well.
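The rotation trick can be sketched as a self-contained example (the mixing function and the count of 7 base constants are illustrative assumptions, not code from the thread). Note that a plain `n >> (64 - sq)` is undefined behaviour in C/C++ when `sq == 0`, so the helper below guards that case:

```cpp
#include <cstdint>

// Rotate left by sq (0..63). Guarding sq == 0 avoids the undefined
// 64-bit shift; compilers still fold this idiom into a single ROL on x64.
static inline uint64_t rotl64(uint64_t n, unsigned sq) {
    sq &= 63;
    return sq ? (n << sq) | (n >> (64 - sq)) : n;
}

// Hypothetical setup: one 64-bit base constant per piece type, derived
// here with a splitmix64-style mixer purely for illustration.
static uint64_t mix64(uint64_t x) {
    x += 0x9E3779B97F4A7C15ull;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ull;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBull;
    return x ^ (x >> 31);
}

static uint64_t base_key[7];

void init_keys(void) {
    for (int p = 0; p < 7; ++p)
        base_key[p] = mix64((uint64_t)p);
}

// The whole piece-square Zobrist table collapses to one rotation per lookup.
uint64_t piece_square_key(int piece, int sq) {
    return rotl64(base_key[piece], (unsigned)sq);
}
```

Since a 64-bit constant without rotational symmetry yields 64 distinct rotations, one key per piece type replaces a 64-entry table row.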
-
- Posts: 892
- Joined: Sun Nov 19, 2006 9:16 pm
- Location: Russia
Re: On-the fly hash key generation?
mar wrote: That's an awesome idea! Rotations will be folded into just one instruction on x64, cutting the memory requirement down to a practically negligible amount (nearly by a factor of 64). Does it work well in your engine compared to traditional full tables? I can imagine it might work very well.

We are talking about micro-optimizations. I could not measure any speed difference between the rotation version and the full-table version.
I wanted to avoid the random element of Zobrist keys and hoped to find better ones. A set of just 7 numbers is much easier to optimize.
-
- Posts: 892
- Joined: Sun Nov 19, 2006 9:16 pm
- Location: Russia
Re: On-the fly hash key generation?
hgm wrote: A suitable quantitative measure of the quality of a set of keys could be the number of 6-key dependencies you can detect on several (randomly picked, but then kept fixed) subsets of the bits, chosen with few enough bits that you actually expect a fair number of such dependencies. The 6-key dependencies are efficiently detected by storing all combinations of 3 different keys in a hash table: if two combinations map to the same entry, you have found a 6-key dependency. For a set of 800 keys there are 800*799*798/3! = 85M 3-key combinations (about 680MB at 8 bytes each). By the birthday paradox that makes some 3.6e15 collision attempts, so you should start to see collisions when you consider 52-bit subsets of the key, and get a fair number when you consider 48-bit subsets.

I do not understand what I should do, or what to do with the result.
I am optimizing within the chess space, so some key combinations are impossible or very rare in practice. For example, there are only 2 kings in a chess game, and most of the time they are on their home ranks. Collisions with castling keys are less important because they can only happen between positions before and after a castling move, and so on.
For example, I have read here that an average of 5 4-key 32-bit dependencies is normal for a random set. My way of generating keys would mean either none or millions of such collisions under that condition. So I should create a set without 32-bit collisions, but what should I do after that? I can try to test 5-key sets, but I am not sure that 0 collisions is theoretically possible for a 32-bit subkey. And even if it is not possible for the full space, it is probably possible in the chess subspace...
-
- Posts: 931
- Joined: Tue Mar 09, 2010 3:46 pm
- Location: New York
- Full name: Álvaro Begué (RuyDos)
Re: On-the fly hash key generation?
Aleks Peshkov wrote: For example, I have read here that an average of 5 4-key 32-bit dependencies is normal for a random set. My way of generating keys would mean either none or millions of such collisions under that condition. So I should create a set without 32-bit collisions, but what should I do after that? I can try to test 5-key sets, but I am not sure that 0 collisions is theoretically possible for a 32-bit subkey. And even if it is not possible for the full space, it is probably possible in the chess subspace...

To be clear, I only looked at this 4-key collision number as a simple detector of glaring defects in a hashing scheme.
Aleks Peshkov wrote: I wanted to avoid the random element of Zobrist keys and hoped to find better ones. A set of just 7 numbers is much easier to optimize.

You can use full-size tables and initialize them with this procedure:
Code:
u64 h = 1;
for (int i = 0; i < 64 * 12; ++i) {
    zobrist_table[i] = h;
    u64 carry = h >> 63;  /* test the top bit before it is shifted out */
    h <<= 1;
    if (carry)
        h ^= 0x42F0E1EBA9EA3693ull;  /* CRC-64/ECMA-182 polynomial */
}
NOTE: I would probably use a random constant as the initial value for h instead of 1, to avoid some unfortunate properties of taking a subset of the bits (for instance when computing the index of an entry in the hash table).
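Whatever seed is chosen, it is cheap to verify empirically that the 64*12 generated entries are nonzero and pairwise distinct. A sketch (the function name and parameterization are mine; the step tests the top bit before shifting, the standard Galois-LFSR update for a degree-64 polynomial):

```cpp
#include <cstdint>
#include <set>

typedef uint64_t u64;

// Return true if the first n states of the shift-register generator,
// started from `seed`, are all nonzero and pairwise distinct.
bool keys_distinct_and_nonzero(u64 seed, int n) {
    std::set<u64> seen;
    u64 h = seed;
    for (int i = 0; i < n; ++i) {
        if (h == 0 || !seen.insert(h).second)
            return false;
        u64 carry = h >> 63;  /* feedback bit, taken before the shift */
        h <<= 1;
        if (carry)
            h ^= 0x42F0E1EBA9EA3693ull;  /* CRC-64/ECMA-182 polynomial */
    }
    return true;
}
```

A zero seed is the one degenerate case: the all-zero state maps to itself, so every table entry would be 0.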