how to measure frequency of hash collisions.

 Posts: 3818
 Joined: Tue Mar 14, 2006 10:34 am
 Location: Ethiopia
 Contact:
how to measure frequency of hash collisions.
Is there a way to directly measure the number of hash collisions (not just their effect)? I was thinking of storing the FEN along with the hash key in the transposition table entry, so that if the signatures match but the FENs don't, it would be a collision. Is there a better way?
Edit: Right after writing this I realized that storing _two_ hash keys is better (one could be a string hash of the FEN). If the primary keys match but the second ones don't, it is a collision.
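The two-key idea from the edit above can be sketched in a few lines; the struct and function names here are mine, not from any particular engine:

```cpp
#include <cstdint>

// Each table entry carries two independent signatures of the position.
struct Entry {
    uint64_t key1;  // primary hash key (used for indexing/matching)
    uint64_t key2;  // independent second key (collision detector)
};

// A primary-key match with a secondary mismatch proves the stored
// position differs from the probing one: a certain key1 collision.
bool is_collision(const Entry& e, uint64_t key1, uint64_t key2) {
    return e.key1 == key1 && e.key2 != key2;
}
```

It can only miss a collision when both independent keys collide at once, which is astronomically unlikely.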
Re: how to measure frequency of hash collisions.
You can get a good estimate by measuring the number of near-collisions.
Daniel Shawul wrote: Is there a way to directly measure the number of hash collisions (not just their effect)? I was thinking of storing the FEN along with the hash key in the transposition table entry, so that if the signatures match but the FENs don't, it would be a collision. Is there a better way?
Edit: Right after writing this I realized that storing _two_ hash keys is better (one could be a string hash of the FEN). If the primary keys match but the second ones don't, it is a collision.
For example, each time you have a miss, count how many of the leftmost bits match (XOR the two keys and look for the highest bit set; call that index 'b'). Then do count[b]++. Then make a histogram with b on one axis and log(count) on the other. You should get a linear plot once you have enough measurements. The number of false hits would be approximately count[0]/2.
There are variants of this that are essentially the same (instead of XOR use subtraction, or count the Hamming distance, or count the rightmost match, etc.).
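A minimal sketch of this near-collision histogram, assuming 64-bit keys (the array and function names are mine):

```cpp
#include <cstdint>

// histogram[b] counts misses whose two keys first differ at bit b
// (so bits 63..b+1 matched; b = 0 means 63 leading bits matched).
static uint64_t histogram[64];

// Call on every hash-table miss with the stored and probing keys.
void record_miss(uint64_t stored, uint64_t probe) {
    uint64_t x = stored ^ probe;
    if (x == 0) return;            // identical keys: not a miss
    int b = 63;
    while (!((x >> b) & 1)) b--;   // index of the highest set bit of the XOR
    histogram[b]++;
}

// histogram[0] holds near-misses where 63 leading bits matched; in roughly
// half of those the last bit would also match, hence the /2 extrapolation.
double estimated_false_hits() { return histogram[0] / 2.0; }
```

Plotting log(histogram[b]) against b should give the straight line described above; a bulge at low b signals a weak hash function.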

Some tests: Attn Don
marcelk wrote: You can get a good estimate by measuring the number of near-collisions.
Daniel Shawul wrote: Is there a way to directly measure the number of hash collisions (not just their effect)? I was thinking of storing the FEN along with the hash key in the transposition table entry, so that if the signatures match but the FENs don't, it would be a collision. Is there a better way?
Edit: Right after writing this I realized that storing _two_ hash keys is better (one could be a string hash of the FEN). If the primary keys match but the second ones don't, it is a collision.
For example, each time you have a miss, count how many of the leftmost bits match (XOR the two keys and look for the highest bit set; call that index 'b'). Then do count[b]++. Then make a histogram with b on one axis and log(count) on the other. You should get a linear plot once you have enough measurements. The number of false hits would be approximately count[0]/2.
There are variants of this that are essentially the same (instead of XOR use subtraction, or count the Hamming distance, or count the rightmost match, etc.).
Thanks for the suggestion. I think that will be a good way to measure for PRNG-generated hash keys, where collisions are very rare; because of that, it may be better to test for similarity of keys...
I did a quick test of the formulas we have been examining, and the collision rate is embarrassing :( The keys differ in subtle ways and look good visually, but there are many collisions.
FNV1 => 25000 collisions/sec
FNV1a => 15 collisions/sec
The one I suggested with a rotate & multiply => 1600 collisions/sec !!
Code: Select all
const U64 C1 = U64(0x109951162821a737);
U64 h = U64(0xeadf018b615d3be8);
h = (h << sq) ^ (h >> (64 - sq));
h *= C1;
h = (h << cp) ^ (h >> (64 - cp));
return h;
Code: Select all
h += C1
Visual inspection is definitely misleading. And the power of addition is revealed again.
More later

Re: Some tests: Attn Don
I tested separate tables for [square] and [piece], combined with either multiply or add; both gave 0 collisions/sec after 200 million positions. The rotating code that does h += C1 also gave 0 collisions after the same number of simulations, but if C1 is shorter it can give a few (1-2) collisions per second, while using multiplication there is surprisingly very bad. I guess the rotate by square & cp and the multiplication are somehow related. I am trying many formulas now, deliberately modifying some of them to give overlapping signatures, and the tool spots them right away with an astounding 20000 collisions/second... It would have been much easier to devise a hash function with this tool in hand. If you get 0 collisions after a few seconds, the hash function is probably good enough.
What I do is randomly select the number of pieces and their placement (no overlaps), calculate two hash keys, and store them in the TT. Before replacing an entry I check for a collision with the stored position, and it works pretty well. It doesn't require a chess engine or perft. I don't know if I should limit the domain to similar positions to catch collisions faster, but this works pretty well so far.
Re: how to measure frequency of hash collisions.
You can estimate the collision rate by reserving N bits for checking. So if your key is 64 bits, pretend it's only 60 bits and the other 4 bits are for collision testing. If the 4 bits do not match, it was a collision. You can extrapolate to get the 64-bit collision rate estimate: each time you add a bit you can expect half the number of collisions.
Daniel Shawul wrote: Is there a way to directly measure the number of hash collisions (not just their effect)? I was thinking of storing the FEN along with the hash key in the transposition table entry, so that if the signatures match but the FENs don't, it would be a collision. Is there a better way?
Edit: Right after writing this I realized that storing _two_ hash keys is better (one could be a string hash of the FEN). If the primary keys match but the second ones don't, it is a collision.
It's arbitrary how many bits to use for the collision test; the more you use, the less often it will be wrong. If you use just 1 bit it will be wrong a lot; if you use 4 it will be wrong 1/16 of the time on average. Note that it will never return a false positive: if it claims a collision, there IS a collision. But it will occasionally say there is no collision when there is one, since it's only accurate to the 64-bit resolution of your full key.
Don
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
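Don's borrowed-bits scheme can be sketched as follows; the constants and names are mine, and 4 check bits are assumed as in his example:

```cpp
#include <cstdint>

// A 64-bit key treated as a pretend 60-bit key plus 4 borrowed check bits.
const int CHECK_BITS = 4;
const uint64_t CHECK_MASK = (1ULL << CHECK_BITS) - 1;

// True when the 60-bit parts match but the check bits differ: a certain
// collision of the 60-bit key (no false positives, some missed collisions).
bool is_detected_collision(uint64_t a, uint64_t b) {
    return (a >> CHECK_BITS) == (b >> CHECK_BITS)
        && (a & CHECK_MASK) != (b & CHECK_MASK);
}

// Scale the observed 60-bit collision count down to a 64-bit estimate:
// each extra bit halves the expected collisions, so divide by 2^4 = 16.
double estimate_64bit_collisions(uint64_t detected60) {
    return detected60 / double(1 << CHECK_BITS);
}
```

The appeal is that nothing extra is stored: the check bits are carved out of the key the program already keeps.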
Re: how to measure frequency of hash collisions.
Three things I have done in the past.
Daniel Shawul wrote: Is there a way to directly measure the number of hash collisions (not just their effect)? I was thinking of storing the FEN along with the hash key in the transposition table entry, so that if the signatures match but the FENs don't, it would be a collision. Is there a better way?
Edit: Right after writing this I realized that storing _two_ hash keys is better (one could be a string hash of the FEN). If the primary keys match but the second ones don't, it is a collision.
1. Create an extra byte (a mini hash key) in the TT entry serving as a collision reporter.
2. Store the position as an extra test.
3. For creating a 64-bit hash key I use 2 different 32-bit randomizing routines: one via the standard PC (srand + rand) and another one I found on the net.
Even with a 48-bit hash key I hardly got collisions, but that was a long time ago. Perhaps the magic is in (3): using different randomizer code for the two 32-bit halves of the key, then combining them into a U64.
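Point (3) might look like this; the two concrete generators below are stand-ins of my choosing (a rand()-style LCG and a Marsaglia xorshift), not Rebel's actual routines:

```cpp
#include <cstdint>

// First 32-bit source: a classic linear congruential generator,
// standing in for the "standard PC" srand/rand routine.
static uint32_t lcg_state = 12345u;
uint32_t lcg32() {
    lcg_state = lcg_state * 1103515245u + 12345u;
    return lcg_state;
}

// Second, unrelated 32-bit source: Marsaglia's xorshift32.
static uint32_t xs_state = 2463534242u;
uint32_t xorshift32() {
    uint32_t x = xs_state;
    x ^= x << 13; x ^= x >> 17; x ^= x << 5;
    return xs_state = x;
}

// Combine the two independent halves into one 64-bit Zobrist key.
uint64_t rand64_combined() {
    return ((uint64_t)lcg32() << 32) | xorshift32();
}
```

Because the two halves come from unrelated recurrences, a weakness in one generator is unlikely to line up with a weakness in the other.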

Re: how to measure frequency of hash collisions.
Yes, that seems to be the case. I think (3) is the best. If two different hash functions happen to collide (give the same 64-bit hash signatures) at the same position, it is a miracle, in fact a miracle multiplied by a miracle. It is already very rare to get a hash collision for two different positions using a decent hash function, so the chance of two different hash functions colliding at the same position is essentially zero. For my test, I used the same random number generator but stored two sequences (primary & secondary hash keys) as a quick hack.
Rebel wrote: Three things I have done in the past.
Daniel Shawul wrote: Is there a way to directly measure the number of hash collisions (not just their effect)? I was thinking of storing the FEN along with the hash key in the transposition table entry, so that if the signatures match but the FENs don't, it would be a collision. Is there a better way?
Edit: Right after writing this I realized that storing _two_ hash keys is better (one could be a string hash of the FEN). If the primary keys match but the second ones don't, it is a collision.
1. Create an extra byte (a mini hash key) in the TT entry serving as a collision reporter.
2. Store the position as an extra test.
3. For creating a 64-bit hash key I use 2 different 32-bit randomizing routines: one via the standard PC (srand + rand) and another one I found on the net.
Even with a 48-bit hash key I hardly got collisions, but that was a long time ago. Perhaps the magic is in (3): using different randomizer code for the two 32-bit halves of the key, then combining them into a U64.
For those interested in trying out their hash functions for collisions, here is the code. There could be a bug, though.
Code: Select all
#include <stdio.h>
#include <time.h>
//U64
#ifdef _MSC_VER
typedef unsigned __int64 U64;
typedef unsigned int U32;
# define U64(x) (x##ui64)
# define FMTU64 "0x%016I64x"
#else
# include <inttypes.h>
typedef uint64_t U64;
typedef uint32_t U32;
# define U64(x) (x##ull)
# define FMTU64 "0x%016llx"
#endif
#define unitBB(x) ((U64)1 << (x))
//PRNG
#define MY_RAND_MAX 0x7fff
struct PRNG {
U32 randn;
void seed(int sd) {
randn = sd;
}
U32 rand() {
randn *= 214013;
randn += 2531011;
return ((randn >> 16) & MY_RAND_MAX);
}
U64 rand64() {
return((U64)rand()) ^
((U64)rand() << 15) ^ ((U64)rand() << 30) ^
((U64)rand() << 45) ^ ((U64)rand() << 60);
}
};
//TT entry
struct ENTRY {
U64 key1;
U64 key2;
U32 stores;
};
//pieces
int piece[32];
int square[32];
int npieces;
U64 secondary_hkey[14][64];
U64 cp_key[14];
U64 sq_key[64];
ENTRY* table;
//put your hash function here
U64 hashf(int cp,int sq) {
/*/
const U64 C1 = U64(0xeadf018b615d3be8);
const U64 C2 = U64(0x109951162821a737);
U64 h = C1;
h = (h << sq) ^ (h >> (64 - sq));
h += C2;
h = (h << cp) ^ (h >> (64 - cp));
return h;
/*
const U64 FNV = U64(1099511628211);
U64 h = U64(19282138887198);
h = h * FNV;
h = h ^ cp;
h = h * FNV;
h = h ^ sq;
return h;
/*
return cp_key[cp] * sq_key[sq];
/*/
return cp_key[cp] + sq_key[sq];
//*/
}
//primary key
U64 get_key1() {
U64 key = 0;
for(int i = 0;i < npieces;i++)
key ^= hashf(piece[i],square[i]);
return key;
}
//secondary key to compare to
U64 get_key2() {
U64 key = 0;
for(int i = 0;i < npieces;i++)
key ^= secondary_hkey[piece[i]][square[i]];
return key;
}
int main() {
const int TT_SIZE = 1 << 16;
const unsigned int TT_MASK = TT_SIZE - 1;
const int PIECES_LIMIT = 32; //reduce this for faster tests ( endgame positions ).
//table (value-initialized so stale garbage can't fake a key match)
table = new ENTRY[TT_SIZE]();
//secondary hashkeys
PRNG prng;
prng.seed(0);
for(int i = 0;i < 14;i++)
for(int j = 0;j < 64;j++)
secondary_hkey[i][j] = prng.rand64();
for(int i = 0;i < 14;i++)
cp_key[i] = prng.rand64();
for(int i = 0;i < 64;i++)
sq_key[i] = prng.rand64();
//test for collisions of primary hash keys
U64 all,key1,key2,bb;
U32 index;
U32 collisions = 0;
int sq,millions = 0,j = 0;
ENTRY* pentry;
clock_t start,end;
start = clock();
while(true) {
j++;
//set up random position
all = 0;
npieces = (prng.rand() % PIECES_LIMIT) + 1;
for(int i = 0;i < npieces;i++) {
piece[i] = (prng.rand() % 12);
do {
sq = (prng.rand() % 64);
bb = unitBB(sq);
} while((all & bb));
square[i] = sq;
all |= bb;
}
//get the two keys & store them
key1 = get_key1();
key2 = get_key2();
index = U32(key1 & TT_MASK);
pentry = &table[index];
if(pentry->key1 == key1) {
if(pentry->key2 != key2) {
collisions++;
}
} else {
pentry->key1 = key1;
pentry->key2 = key2;
pentry->stores++;
}
//display stat
if((j % 1000000) == 0) {
end = clock();
int time = int((end - start) / CLOCKS_PER_SEC);
if(time == 0) time = 1; //avoid division by zero on fast runs
printf("Time %ds Positions %d\n",time,j);
printf(" Collisions = %d. Rate = %.2f collisions per sec.\n",
collisions,collisions / double(time));
millions++;
if(millions == 100) break;
}
}
/*
FILE* log = fopen("test.txt","w");
for(int i = 0;i < TT_SIZE;i++) {
fprintf(log,"%d\n",table[i].stores);
}
fclose(log);
*/
/*
for(int i = 0;i < 64;i++)
printf(FMTU64"\n",hashf(i,5));
*/
return 0;
}

Re: how to measure frequency of hash collisions.
But that won't work, because in a hash collision the hash signature (all 64 bits) is the same for two completely different positions... You need a key from another sequence of random numbers (be it from the same or a different hash function). Am I missing something?
Don wrote: You can estimate the collision rate by reserving N bits for checking. So if your key is 64 bits, pretend it's only 60 bits and the other 4 bits are for collision testing. If the 4 bits do not match, it was a collision. You can extrapolate to get the 64-bit collision rate estimate: each time you add a bit you can expect half the number of collisions.
Don
 hgm
 Posts: 23959
 Joined: Fri Mar 10, 2006 9:06 am
 Location: Amsterdam
 Full name: H G Muller
 Contact:
Re: how to measure frequency of hash collisions.
I was just asking myself the same question. I wrote a program to generate random positions (assign random squares and piece types to a number of pieces, taking care that no square is used twice), and compared the number of collisions I got with truly random Zobrist keys against keys derived with your pieceKey+squareKey idea. I checked with a second key, but that wasn't really needed, because all primary-key matches were indeed collisions, as the chance of randomly generating the same position twice is about zero. I could not detect a significant difference in collision frequency between the random keys and the composite keys.
Daniel Shawul wrote: Is there a way to directly measure the number of hash collisions (not just their effect)?
I don't think this is a decisive test, though. What is important is how many collisions you get when you have a set of very similar positions, as occur in a search tree.
A method I used in the past to test the quality of a set of Zobrist keys is to count the number of 'dependency loops': subsets of N different keys that XOR to zero. For a given key length and a much larger total number of keys there is a certain N beyond which such cycles cannot be avoided. The quality of the set is determined by whether you already have cycles of lower N (i.e. an unnecessary dependency between a small number of keys).
E.g. with 768 keys you have 768*767*766/6 = 75M different 'products' (XORs, actually) of 3 keys, which conceivably could all be different if the key length is at least 27 bits (75M < 2^27). This means you can have 27-bit key sets where no product of 6 keys is zero. You have 14G different products of 4 keys, so with 32-bit keys there must be duplicates, meaning there must be products of 8 keys that are zero. Keys of realistic length (such as 64 bits) are hard to test, because collisions are extremely rare. So it is best to test model systems of short key length, where collisions are frequent and you can efficiently detect them by storing the complete key space in a table (so you can easily see how often a product of N keys repeats). Cycles of odd length (say 7) can be detected by putting all products of 3 keys in a hash table, and then checking all products of 4 keys against that table, to see if they occur there.
Note that this is not yet the full story: in practice a very bad dependency (like 3 keys XORing to 0) could be completely hidden by putting the bad keys in the table for a piece type that only occurs once (like the King).
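A model-system version of the dependency-loop count might look like this; the short key length and helper name are my own choices:

```cpp
#include <cstdint>
#include <vector>

// Tabulate every XOR of 3 distinct keys over the whole (short) key space
// and count repeats. Two equal 3-products mean that 6 (or fewer) keys
// XOR to zero, i.e. an unwanted dependency in the key set.
int count_3xor_repeats(const std::vector<uint32_t>& keys, int key_bits) {
    std::vector<uint32_t> seen(1u << key_bits, 0); // full key space fits in a table
    int repeats = 0;
    int n = (int)keys.size();
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            for (int k = j + 1; k < n; k++) {
                uint32_t p = keys[i] ^ keys[j] ^ keys[k];
                if (seen[p]++) repeats++;          // same product seen before
            }
    return repeats;
}
```

With realistic key counts this is only feasible for short model keys, which is exactly the point: collisions become frequent enough to measure.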
Re: how to measure frequency of hash collisions.
Yes, what you are missing is that you are not testing the 64-bit key; you are testing a 60-bit key. In my example, just pretend that you are generating a 60-bit key and then a totally independent 4-bit key. Let's say, for example, that you get a collision on the 60-bit key once out of 1 million matches (as judged by the 4-bit verify key). You can expect that had this been a 64-bit key instead of 60 bits, you would get only 1/16 as many collisions, since 2^4 is 16. Each extra bit cuts the number of expected collisions in half.
Daniel Shawul wrote: But that won't work, because in a hash collision the hash signature (all 64 bits) is the same for two completely different positions... You need a key from another sequence of random numbers (be it from the same or a different hash function). Am I missing something?
Don wrote: You can estimate the collision rate by reserving N bits for checking. So if your key is 64 bits, pretend it's only 60 bits and the other 4 bits are for collision testing. If the 4 bits do not match, it was a collision. You can extrapolate to get the 64-bit collision rate estimate: each time you add a bit you can expect half the number of collisions.
Don
This is superior to any of the methods proposed because it does not require modifying the program to add more key space; you just borrow a few bits from the already existing key, so it can be added with no additional overhead. In fact the program could continue to use the full 64-bit key while doing the 60-bit collision detection test for statistical purposes, and the result could even be put in the program's log file. If you use 4 bits for collision detection, you count how many of the 60-bit keys collided and divide by 16 to get an ESTIMATE of how often a 64-bit key would have collided.