a "huge" array of keys. Maybe it is not really that huge but, more for fun than out of any real need, I've looked for a way to reduce this array's size
while keeping the keys just as valid.
In C, we access this array with something like what is reported at this link:
http://web.archive.org/web/200708222040 ... obrist.htm
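For reference, the classic scheme looks roughly like this (the array name, its dimensions, and the helper function are my assumptions for illustration; engines differ in how they index piece, color, and square):

```c
#include <stdint.h>

typedef uint64_t U64;

/* One 64-bit random key per (piece, color, square) combination.
   The sizes here (6 piece types, 2 colors, 64 squares) are example figures. */
U64 zobrist[6][2][64];

/* Incrementally update the position hash when a piece moves:
   XOR out the key of the old square, XOR in the key of the new one. */
U64 update_hash(U64 hash, int piece, int color, int from, int to)
{
    hash ^= zobrist[piece][color][from]; /* remove piece from old square */
    hash ^= zobrist[piece][color][to];   /* place piece on new square */
    return hash;
}
```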
Maybe somebody uses a different approach but, in the end, he or she always has an array of 64-bit values.
As said in another thread, assembly programmers look at things in another way... so I've looked at the key array as a sequence of 64-bit values, regardless of the index we use to reach them. We still have to compute the index somehow, of course, but looking at the array as a sequence of 64-bit values lets us see the whole array as... a flow of bits, ideally separated every 64 of them:
1001...11010 1010...1011 11...01011
In C we access this array at 64-bit boundaries. But why do we need this 64-bit boundary? If we don't care about processor data alignment, we can access this array at any smaller boundary without losing the "randomness" of the keys! In fact, any bit matters only for its position, and the XOR that we use to build the position key acts bit by bit, without influencing any other bit (as, for example, addition would, because of the carry); therefore we can read the Zobrist random key array at any bit boundary less than or equal to 64. I think (or rather hope) it can be proved mathematically that the randomness of the generated keys is not affected by the boundary we choose.
So we can access this array on any byte, word, or dword boundary, and even on a bit boundary. That means we can use a very short Zobrist key array, simply by accessing it at a bit boundary. What does this mean in practice? In the example above, the index of the 64-bit value we are looking for becomes the index of the first of the 64 bits we need. Obviously this gives us the shortest possible array, but it is hard to extract the bits we need (we would have to read 64+8 bits and then shift the result according to the bit index). An easier way is to use a byte boundary, with something like this:
U64 GetZobristKey(int piece, int color, int square)
{
    int index = piece*color*square;  /* schematic: a byte index into the table */
    const char *p = ((const char *)zobrist) + index;
    return *(const U64 *)p;
}
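One caveat: dereferencing a misaligned U64 pointer is undefined behavior on some platforms (and violates strict aliasing). A memcpy-based variant is the portable way to do the same overlapping read; the table name and sizes below are placeholders of mine:

```c
#include <stdint.h>
#include <string.h>

typedef uint64_t U64;

/* Overlapping key table: one byte per key, plus 7 spare bytes at the end
   so the last 64-bit read stays in bounds (768 keys is an example figure). */
static unsigned char zobrist_bytes[768 + 7];

U64 GetZobristKeyPortable(int index)
{
    U64 key;
    /* memcpy is well-defined for any alignment; compilers turn this into
       a single unaligned load where the hardware allows it. */
    memcpy(&key, zobrist_bytes + index, sizeof key);
    return key;
}
```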
With this scheme, our array no longer needs
(piece*color*square * 64) bits
as in the original, well-known implementation, but only
(piece*color*square * 8 + 7*8) bits
(the 7 extra bytes let the last key be read in full). This reduces our array to roughly one eighth of its original size.
If we use a bit boundary, the array shrinks to
(piece*color*square + 63) bits
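The bit-boundary variant described earlier (read 64+8 bits starting at the containing byte, then shift by the bit offset) might be sketched like this; the names, the table size, and the LSB-first bit numbering are my assumptions:

```c
#include <stdint.h>
#include <string.h>

typedef uint64_t U64;

/* One bit per key, plus 63 spare bits; 768 example keys -> 104 bytes. */
static unsigned char zobrist_bits[(768 + 63 + 7) / 8];

U64 GetZobristKeyAtBit(int bit_index)
{
    /* Load the 9 bytes (72 bits) that contain the 64 bits we need. */
    unsigned char buf[9];
    memcpy(buf, zobrist_bits + bit_index / 8, sizeof buf);

    int shift = bit_index % 8;

    /* Assemble the low 8 bytes as a little-endian 64-bit value... */
    U64 lo = 0;
    for (int i = 0; i < 8; i++)
        lo |= (U64)buf[i] << (8 * i);

    if (shift == 0)
        return lo;

    /* ...then shift, pulling the missing high bits from the 9th byte. */
    return (lo >> shift) | ((U64)buf[8] << (64 - shift));
}
```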
One thing I can't prove, for now, is whether the randomness of the keys is really preserved, or whether the generated keys are instead somehow "degenerate". If someone can prove that the randomness is the same and the rate of final position-key collisions is the same, then we have a way to use a very short Zobrist random key array. This could help reduce the CPU cache usage of our programs (at the cost of data misalignment and code complexity!), or maybe it is useless... but for handheld devices, with little memory available, it could still be useful.
What do you think about this idea? Am I really crazy?