Zobrist keys - measure of quality?
Posted: Tue Feb 24, 2015 9:32 am
I always thought that one can't do better than using "truly" random keys, or at least a statistically good PRNG (one that passes various statistical tests).
However, as Steven showed recently while working on Oscar, even a PRNG that fails some statistical tests is good enough to produce Zobrist keys that just work.
So I tried to measure something else: for every pair of features (= keys), I counted the number of identical bits and recorded the maximum over all pairs.
In my case I have 1049 keys (yes, I know that's many more than necessary); for my PRNG the maximum was 50, and for random numbers I got from random.org it was 49.
The average is, of course, very close to 32 (half of my 64-bit key size).
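A minimal sketch of the measurement (my paraphrase, not the actual engine code): two 64-bit keys agree in a bit position exactly where their XOR has a 0, so the number of common bits is 64 minus the popcount of the XOR. The function names and the use of Python's `random` module here are my own choices for illustration.

```python
import random

def max_common_bits(keys, width=64):
    """Return the maximum number of identical bits over all key pairs.

    Two keys share a bit wherever (a ^ b) has a 0 bit, so
    common bits = width - popcount(a ^ b).
    """
    worst = 0
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            common = width - bin(keys[i] ^ keys[j]).count("1")
            worst = max(worst, common)
    return worst

# Example: 1049 random 64-bit keys, as in the post (different PRNG,
# so the exact maximum will differ from the 49/50 reported above).
rng = random.Random(1)
keys = [rng.getrandbits(64) for _ in range(1049)]
print(max_common_bits(keys))
```

With ~550k pairs this quadratic scan is instant, so there is no need for anything cleverer just to measure the statistic.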
Then I used a brute-force loop to improve on this, and managed to reduce the maximum so that no two features share more than 42 common bits.
The idea is that this should, in theory, lower the chance of cancellation,
but one would have to run collision tests to check whether that is actually true (which I haven't done).
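One way such a brute-force loop could look (a hypothetical sketch, not the code actually used): find the worst pair, then try fresh random replacements for one of its members, accepting a candidate only if it strictly improves that key's worst case, so the global maximum can never increase.

```python
import random

def common_bits(a, b, width=64):
    # Number of bit positions where a and b agree.
    return width - bin(a ^ b).count("1")

def max_common_with(key, others, width=64):
    return max(common_bits(key, o, width) for o in others)

def reduce_max_common(keys, target, rng, width=64, attempts=100000):
    """Greedy brute force: resample the worst pair's key until no two
    keys share more than `target` bits (or attempts run out)."""
    keys = list(keys)
    for _ in range(attempts):
        # Locate the pair sharing the most bits.
        i, j = max(
            ((a, b) for a in range(len(keys)) for b in range(a + 1, len(keys))),
            key=lambda p: common_bits(keys[p[0]], keys[p[1]], width),
        )
        current = common_bits(keys[i], keys[j], width)
        if current <= target:
            break
        # Try a fresh random key for one member of the pair; keep it
        # only if it strictly improves that key's worst case.
        cand = rng.getrandbits(width)
        others = keys[:i] + keys[i + 1:]
        if max_common_with(cand, others, width) < current:
            keys[i] = cand
    return keys
```

This re-scans all pairs on every attempt, so it is slow for 1049 keys, but for a one-off offline key-generation step that hardly matters; incremental bookkeeping of per-key maxima would speed it up if needed.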