
Re: TT Key Collisions, Workarounds?

Posted: Fri Aug 19, 2011 3:53 am
by bob
hgm wrote:
bob wrote:There is a secondary issue. If you have just one entry in the table, then yes, you can get a collision, but with every position overwriting that entry, you would have to get a collision on two consecutive probes, which is far more unlikely than if you have a bunch of possible entries sitting around to collide with...
I don't think that is true. To get a collision in any of the entries, you need two consecutive positions mapping to that entry to collide, which is exactly as unlikely as for two consecutive probes in general (if the signature does not contain address bits). For any entry it holds that you will only detect collisions with the last position stored there, not with all positions stored there before.

Only when a probe compares against multiple entries (like 4 or 8) does it drive up the collision probability, by that same factor. So the bucket size might matter, but not the total hash size.
I don't follow the "two consecutive". You just need two different positions to map to the same Zobrist signature, and you need the first one to hang around long enough for the second one to map to the same (wrong) entry to get the collision. The bigger the table, the longer that time window is.

I'm only thinking logically here, not mathematically, so I am not going to claim I have computed the odds and that one way is more or less likely. It just seems more reasonable that the bigger the table, the longer entries lie around and the greater the chance of a collision (sort of a variation of the birthday paradox). If you only have one entry in the table, it won't last long, and the collisions have to be close temporally or the wrong entry is gone...

In any case, very few to none per game is what I have seen. Whether that has changed in the last 5+ years since I played with that, I don't know. Faster speeds might run the number up a little...

Re: TT Key Collisions, Workarounds?

Posted: Fri Aug 19, 2011 11:24 am
by hgm
Look at it this way, then: if the table is N times bigger, each of its entries is probed N times less frequently, as the probes are distributed over the entire table. So the time a position spends in the table times the frequency with which it is probed, which gives you the number of probes the position receives, and thus the chance for a collision, is independent of N.

But this holds only if the signature stored in the entry does not contain any address bits. If you store the full 64 bits, but 24 of those were also used to calculate the address, you effectively have a signature of only 40 bits, as the match of the address bits is guaranteed. So in that case, increasing the table size, so that you need more address bits, shortens the effective signature, which drives up the chance of a collision. But for a fixed length of the non-redundant signature, the hash size has no effect on the collision rate (e.g. if you only store 32 bits).
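To make that concrete, here is a minimal sketch of a one-entry-per-address probe (the K=24 layout and all names are only illustrative):

Code: Select all

#include <stdint.h>
#include <stdlib.h>

#define K 24                            /* address bits: 2^24 entries */

typedef struct {
	uint64_t signature;             /* full 64-bit key stored */
	/* ... score, depth, move ... */
} Entry;

static Entry *table;

void tt_init(void)
{
	table = calloc(1ULL << K, sizeof(Entry));
}

/* Because the address already fixes the low K bits of the key, the
 * signature comparison really only tests the remaining 64-K = 40 bits:
 * the effective signature. */
Entry *tt_probe(uint64_t key)
{
	Entry *e = &table[key & ((1ULL << K) - 1)];
	return e->signature == key ? e : NULL;
}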

Experience with deep perft() calculations

Posted: Fri Aug 19, 2011 12:02 pm
by sje
My experience with deep perft() calculations has shown that an effective signature length of 61 bits is insufficient for perft(11) and beyond. The error rate is small; maybe one false positive every few days. But with perft(), this is not permissible.

I now use an effective signature that's 121 bits long and this has repeatedly exhibited zero false positives. I would expect a false positive about once every few thousand years of run time. Yes, this increases the length of a transposition entry, but it also gives complete peace of mind.

The keys themselves are 128 bits long and are generated during program initialization. They are the same keys each time, built from my own extremely long-period PRNG. As these keys are also used for the external opening book file, they must be the same on each run.

For move selection during search, at each node all the moves are generated before the table is probed. Instead of testing the table move for legality, the program scans the generated move list for a match. This is simple code and the time taken is small as the move generation is very, very fast.
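In outline it is just a linear scan over the list (the types here are placeholders, not my actual code):

Code: Select all

#include <stdint.h>

typedef uint32_t Move;                 /* placeholder move encoding */
typedef struct { Move moves[256]; int count; } MoveList;

/* Scan the generated moves for the table move; a miss means the entry
 * was stale or a collision, and its move is simply not used. */
int find_move(const MoveList *list, Move ttmove)
{
	for (int i = 0; i < list->count; i++)
		if (list->moves[i] == ttmove)
			return i;
	return -1;
}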

Re: TT Key Collisions, Workarounds?

Posted: Fri Aug 19, 2011 12:07 pm
by Sven
hgm wrote:Look at it this way, then: if the table is N times bigger, each of its entries is probed N times less frequently, as the probes are distributed over the entire table. So the time a position spends in the table times the frequency with which it is probed, which gives you the number of probes the position receives, and thus the chance for a collision, is independent of N.

But this holds only if the signature stored in the entry does not contain any address bits. If you store the full 64 bits, but 24 of those were also used to calculate the address, you effectively have a signature of only 40 bits, as the match of the address bits is guaranteed. So in that case, increasing the table size, so that you need more address bits, shortens the effective signature, which drives up the chance of a collision. But for a fixed length of the non-redundant signature, the hash size has no effect on the collision rate (e.g. if you only store 32 bits).
In my opinion it does not matter whether address bits are stored redundantly, how many slots per address are used, or how frequently one single entry is probed. You never store two different entries for the same key in the table, so for the question of collisions it is practically only important whether a given key is already present in the table or not.

If I have a hash table containing only one entry then this one corresponds to exactly one hash key. Here I assume that at least all non-address bits of the hash key are stored (otherwise the program has a lower chance of even detecting collisions). Let's say the table is 100% full. If I further say that a new TT lookup comes with a "random" 64-bit key to be searched, then the probability of finding that key in the table is 1/(2^64), and since almost all (all except one) chess positions having that key are different from that single one, 1/(2^64) is approximately the probability of a false match.

If I double the table size, so that it holds two entries, that probability doubles, too. For a table size of 2^K entries and a 100% full table, the probability should therefore be (2^K)/(2^64).

If I set K=24 (which would correspond to a 256MB hash and 16-byte entry size, just as an example) for a collision probability of 1/(2^40), then within 2^40 probes (=1.1e12, about 12.7 days of searching at 1 Mnodes/sec if probing occurred at 100% of visited nodes, so realistically much longer) the probability of one or more collisions in that time would be about 0.632.
(EDITED the last number, initially I had (1 - that value). )
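As a quick numerical sanity check of that 0.632 (just the standard approximation 1-(1-p)^n ~ 1-e^(-np)):

Code: Select all

#include <math.h>
#include <stdio.h>

int main(void)
{
	double p = ldexp(1.0, -40);   /* per-probe false-match probability, 2^-40 */
	double n = ldexp(1.0, 40);    /* number of probes, 2^40 */
	/* P(at least one collision) = 1 - (1-p)^n, approx. 1 - exp(-n*p) */
	printf("%.6f\n", 1.0 - exp(-n * p));   /* prints 0.632121, i.e. 1 - 1/e */
	return 0;
}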

It _is_ a variation of the birthday paradox.

Does my calculation make sense to you?

If it does that would also support Bob's view in this matter.

Sven

Re: TT Key Collisions, Workarounds?

Posted: Fri Aug 19, 2011 3:30 pm
by bob
hgm wrote:Look at it this way, then: if the table is N times bigger, each of its entries is probed N times less frequently, as the probes are distributed over the entire table. So the time a position spends in the table times the frequency with which it is probed, which gives you the number of probes the position receives, and thus the chance for a collision, is independent of N.

But this holds only if the signature stored in the entry does not contain any address bits. If you store the full 64 bits, but 24 of those were also used to calculate the address, you effectively have a signature of only 40 bits, as the match of the address bits is guaranteed. So in that case, increasing the table size, so that you need more address bits, shortens the effective signature, which drives up the chance of a collision. But for a fixed length of the non-redundant signature, the hash size has no effect on the collision rate (e.g. if you only store 32 bits).
You've convinced me. As I said, I was thinking logically and did not really do any mathematical analysis.

Re: TT Key Collisions, Workarounds?

Posted: Mon Aug 22, 2011 12:01 am
by lucasart
I've recently fixed the TT collision problem in my program as follows:
1/ use 56 bits from the 64-bit Zobrist key in the hash table instead of 32
2/ use a much stronger 64-bit RNG

If you're interested, here is the 64-bit RNG that I am using now:

Code: Select all

#include <stdint.h>

typedef uint64_t U64;

U64 rand64()
/* Public domain code for JLKISS64 RNG - long period KISS RNG producing 64-bit results.
 * by David Jones: www.cs.ucl.ac.uk/staff/d.jones/GoodPracticeRNG.pdf */
{
	// seed variables
	static uint64_t x = 123456789123ULL, y = 987654321987ULL;
	static uint32_t z1 = 43219876, c1 = 6543217, z2 = 21987643, c2 = 1732654;

	x = 1490024343005336237ULL * x + 123456789;	// 64-bit linear congruential generator
	y ^= y << 21; y ^= y >> 17; y ^= y << 30;	// 64-bit xorshift; do not seed y = 0

	uint64_t t;
	t = 4294584393ULL * z1 + c1; c1 = t >> 32; z1 = t;	// first 32-bit multiply-with-carry
	t = 4246477509ULL * z2 + c2; c2 = t >> 32; z2 = t;	// second 32-bit multiply-with-carry

	return x + y + z1 + ((uint64_t)z2 << 32);	// combine all four streams into a 64-bit result
}
By the way, the PDF is very interesting to read!
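For example, filling the Zobrist tables at startup could look like this (the array layout is just illustrative):

Code: Select all

#include <stdint.h>

extern uint64_t rand64(void);           /* the generator above */

uint64_t zobrist[2][6][64];             /* [color][piece][square], illustrative layout */

void init_zobrist(void)
{
	for (int c = 0; c < 2; c++)
		for (int p = 0; p < 6; p++)
			for (int sq = 0; sq < 64; sq++)
				zobrist[c][p][sq] = rand64();
}

Since the seeds are fixed compile-time constants, every run produces the same keys.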

Re: TT Key Collisions, Workarounds?

Posted: Mon Aug 22, 2011 1:16 pm
by hgm
Sven Schüle wrote:If I set K=24 (which would correspond to a 256MB hash and 16-byte entry size, just as an example) for a collision probability of 1/(2^40), then within 2^40 probes (=1.1e12, about 12.7 days of searching at 1 Mnodes/sec if probing occurred at 100% of visited nodes, so realistically much longer) the probability of one or more collisions in that time would be about 0.632.
(EDITED the last number, initially I had (1 - that value). )

It _is_ a variation of the birthday paradox.

Does my calculation make sense to you?
Yes, with a 40-bit signature you would have a collision probability of 2^-40 (per comparison). That is what I was saying all along. The signature length determines the probability of a false match.

If you search 4 entries per probe (e.g. when you do shallowest-of-four replacement) the probability becomes 4 times larger.

The table size does not come into it, other than that it might affect the effective signature length (i.e. discounting the redundant bits).
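Schematically, for a bucket of four (the layout is only illustrative):

Code: Select all

#include <stdint.h>

#define SLOTS 4                         /* entries examined per probe */

typedef struct { uint64_t signature; /* ... */ } Entry;
typedef struct { Entry slot[SLOTS]; } Bucket;

/* Up to SLOTS signature comparisons per probe, so up to SLOTS times
 * the chance of a false match compared to a single-entry probe. */
Entry *bucket_probe(Bucket *b, uint64_t signature)
{
	for (int i = 0; i < SLOTS; i++)
		if (b->slot[i].signature == signature)
			return &b->slot[i];
	return 0;
}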

Re: TT Key Collisions, Workarounds?

Posted: Mon Aug 22, 2011 3:58 pm
by Sven
hgm wrote:
Sven Schüle wrote:If I set K=24 (which would correspond to a 256MB hash and 16-byte entry size, just as an example) for a collision probability of 1/(2^40), then within 2^40 probes (=1.1e12, about 12.7 days of searching at 1 Mnodes/sec if probing occurred at 100% of visited nodes, so realistically much longer) the probability of one or more collisions in that time would be about 0.632.
(EDITED the last number, initially I had (1 - that value). )

It _is_ a variation of the birthday paradox.

Does my calculation make sense to you?
Yes, with a 40-bit signature you would have a collision probability of 2^-40 (per comparison). That is what I was saying all along. The signature length determines the probability of a false match.

If you search 4 entries per probe (e.g. when you do shallowest-of-four replacement) the probability becomes 4 times larger.

The table size does not come into it, other than that it might affect the effective signature length (i.e. discounting the redundant bits).
Your "yes" in conjunction with what you are writing afterwards is more like a "no" since you disagree to me :-) But I think we are saying the same things, only with different words.

My definitions:
K := number of address bits
S := log2 of the number of slots per address (e.g. S=2 for four slots)
E := log2 of the entry size in bytes (to simplify, let E be an integer)
N := log2 of the table size in bytes
N = K+S+E, i.e. table size = 2^K * 2^S * 2^E

L := stored signature length (in bits)
L_eff := effective signature length = min(L, 64-K)

Following my previous example of a 256 MB hash table with 16 bytes per entry we would have N=28, E=4 and with S=2 finally K=(28-4-2)=22.
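Spelled out in a few lines of C (with the numbers from the example):

Code: Select all

#include <stdio.h>

int main(void)
{
	int N = 28;            /* table size: 2^28 bytes = 256 MB */
	int E = 4;             /* entry size: 2^4 = 16 bytes      */
	int S = 2;             /* slots per address: 2^2 = 4      */
	int K = N - S - E;     /* address bits                    */
	int L = 64;            /* stored signature length         */
	int L_eff = L < 64 - K ? L : 64 - K;   /* min(L, 64-K)    */
	printf("K=%d, L_eff=%d\n", K, L_eff);  /* K=22, L_eff=42  */
	return 0;
}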


a) The case you are talking about is:
L < 64-K => L_eff = L

b) The case I was talking about is:
L >= 64-K => L_eff = 64-K


a) Your case has no redundant signature bits, so in that case it is right that the table size has no effect on the probability of a false match. This is true as long as the condition above holds; it becomes false once the table size is increased far enough that L_eff = 64-K.

In your case it can occur, in theory, that a matching entry is "found" even though it was initially stored for a different hash key. That happens since some bits are dropped when storing, and can't be used any longer to reject the entry. But that probability is small. For D dropped bits (D := 64-K-L) it is something like (2^D - 1)/(2^64). So the expected case is that this does not occur.

Given that assumption and the equation L_eff = L that I gave above, it is clear that L_eff is independent of N, at least, as you said. I have to think about the question whether the probability of a false match depends on S, though. Maybe you are right, since the "number of probes" in my previous post is not identical to the number of signature comparisons (we can have 1, 2, 3, or 4 for 4 slots). OTOH, even with 4 comparisons per probe I would have either 0 or 1 false matches, since the first match terminates the probe. So I am not sure about it. It is possible that you are right, but I have not reached the deciding point yet.

To clear this up it will also be necessary to define which change of parameters we intend to look at:
- increase S (by the programmer) but keep N constant (by the environment), thereby reducing K?
- increase S (by the programmer) and increase N by the same amount as S (by the environment), thereby keeping K constant?


b) My case has zero or more redundant signature bits but no "dropped bits" (D=0). K directly depends on both N and S, and we have:
L_eff = 64-(N-S-E)
so here the table size has an influence.


Could you please verify my theory, and try to explain your theory about case a) based on the terms I introduced above so that we are talking about the same thing? That would be helpful.

Sven

Re: TT Key Collisions, Workarounds?

Posted: Mon Aug 22, 2011 8:12 pm
by hgm
Yes, I think we are in total agreement. About the 0 or 1 matches: you lose nothing by stopping at the first match, because no two entries in the bucket can hold the same signature. One of them would then have had to be written while the other one was already present; the collision would already have taken place at that moment, and you would have overwritten. So if an entry with the same signature as you are looking for is present in the bucket, it will always be found. There is no chance it will be 'screened' by another entry.

Note this whole discussion originated from the question whether a signature of 10 bits could in practice be better than 32 or 64 bits, because the detrimental effect of the collisions is more than offset by the higher packing density, and thus the larger size (in entries) of the hash table. With only a 10-bit signature, I suppose it is a safe assumption that many bits will be dropped from the key. A 32-bit key would still not be enough (it would leave only 22 address bits), and a key between 32 and 64 bits would not be any faster than a 64-bit key, so I do assume a 64-bit total key length.

Re: TT Key Collisions, Workarounds?

Posted: Mon Aug 22, 2011 8:36 pm
by Sven
hgm wrote:Yes, I think we are in total agreement.
That sentence made my day :-) :-)

Cheers,
Sven