Question about parallel search and race conditions

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Question about parallel search and race conditions

Post by Michael Sherwin »

If one thread is reading a hash entry while another thread is writing to that entry, corrupted data might get returned. (T or F)? Assumption: true.

If one thread is writing a hash entry while another thread is also writing to that entry, the data in the entry may be corrupted. (T or F)? Assumption: true.

A mutex (spinlock) is slow. (T or F)? Assumption: true.

Question: Instead of using a mutex:
1. What if every read is done twice and the read is considered safe only when the two reads are identical? And on writes, if the entry is read back and matches what was written, is the write safe?

2. Or can a checksum approach work? Say, XOR the entry fields together and store the checksum in the hash entry (a minimal sketch follows after this list).

3. Would 1 and/or 2 above be faster or slower than using a mutex?
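
For illustration, a minimal sketch of the checksum idea in question 2, assuming a two-word entry plus a separate checksum field (the names, field layout and packing are hypothetical, not from any particular engine):

#include <cstdint>

struct HashEntry {
    uint64_t key;    // Zobrist signature of the position
    uint64_t data;   // move, score, depth and bound packed into one word
    uint64_t check;  // checksum of the two fields above
};

inline void store_entry(HashEntry& e, uint64_t key, uint64_t data) {
    e.key   = key;
    e.data  = data;
    e.check = key ^ data;                 // recompute the checksum on every store
}

inline bool probe_entry(const HashEntry& e, uint64_t key, uint64_t& data) {
    if (e.key != key)                return false;  // different position
    if ((e.key ^ e.data) != e.check) return false;  // torn or corrupted entry
    data = e.data;
    return true;
}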

Edit: On a write race condition I see that two threads can both end up in an endless loop. Solution? The lower-numbered thread gives up first.

Thanks!
If you are on a sidewalk and the covid goes beep beep
Just step aside or you might have a bit of heat
Covid covid runs through the town all day
Can the people ever change their ways
Sherwin the covid's after you
Sherwin if it catches you you're through
hgm
Posts: 27808
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Question about parallel search and race conditions

Post by hgm »

I think all three are true.

The checksum would work, and be much cheaper than a mutex speed-wise. Storing a separate checksum in each entry is expensive space-wise, however. Bob has thought of a very clever solution to that: use the hash key (or at least the part of it stored in the entry as signature) as the checksum. Then corrupted entries, which have a faulty checksum, would also have a corrupted signature, and be ignored for that reason. To implement this, you XOR the 'other' word of the entry into the word containing the hash key before you write, and do it again after you read. If the second word is still the same (i.e. the entry was not corrupted) the two XORs cancel each other, and the signature is still correct. If the word you read is different from the word that was written (corrupted entry), you will not reconstruct the original signature, and the probe will be counted as a miss.
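
To make that concrete, a minimal sketch of the XOR trick, assuming a two-word, 16-byte entry (the field names and the packing of the data word are illustrative):

#include <cstdint>

struct TTEntry {
    uint64_t key;    // Zobrist key XORed with the data word at store time
    uint64_t data;   // move, score, depth and bound packed into one word
};

inline void tt_store(TTEntry& e, uint64_t key, uint64_t data) {
    e.key  = key ^ data;   // fold the data word into the signature
    e.data = data;
}

inline bool tt_probe(const TTEntry& e, uint64_t key, uint64_t& data) {
    // If another thread tore this entry, key ^ data no longer reproduces
    // the original signature and the probe simply counts as a miss.
    if ((e.key ^ e.data) != key) return false;
    data = e.data;
    return true;
}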

The most efficient way is to just ignore the problem, though. You will have to prepare for false hits due to key collisions anyway. Occasional false info due to SMP corruption hardly adds anything to that; it is very rare for two threads to try to write the same entry at exactly the same time.
DustyMonkey
Posts: 61
Joined: Wed Feb 19, 2014 10:11 pm

Re: Question about parallel search and race conditions

Post by DustyMonkey »

(1) Depends on how large your hash entries are. Aligned 64-bit (8-byte) operations are atomic on x64, and there is also an atomic 128-bit compare-and-swap (cmpxchg16b), but using it effectively requires care as well as looping.

(2) true

(3) A spinlock can be pretty fast. If you try to lock your entire table there will be a lot of spinning, but if you lock only small sections at a time there will rarely be any. The less spinning you want, the more memory you will need to dedicate to your locks, however. I believe some engines use 10-byte hash entries; a group of six entries is then 60 bytes, and to maintain cache-line alignment the remaining 4 bytes hold additional information, such as whether those six entries are locked (sketched below).
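
A minimal sketch of that per-bucket locking, using one std::atomic_flag spinlock per small group of entries (the bucket layout and names are illustrative):

#include <atomic>
#include <cstdint>

// Hypothetical layout: a handful of entries share one spinlock, padded so
// that a bucket fits in a single cache line.
struct Bucket {
    std::atomic_flag lock = ATOMIC_FLAG_INIT;
    // ... the hash entries for this bucket would live here ...
};

struct BucketGuard {                      // RAII spinlock: acquire on construction
    std::atomic_flag& f;
    explicit BucketGuard(std::atomic_flag& flag) : f(flag) {
        while (f.test_and_set(std::memory_order_acquire)) { /* spin */ }
    }
    ~BucketGuard() { f.clear(std::memory_order_release); }
};

// Usage (hypothetical table of buckets):
//   BucketGuard guard(table[index].lock);
//   ... read or write the entries in table[index] under the lock ...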

questions:

(a) Not safe. The time delta between writes to different parts of a hash entry can be arbitrarily large, and it is out of the engine's control.

(b) Just use a part of the hash key that doesn't determine the index into the table. If you use Zobrist hashing, for example, all the topmost bits are just as good a signature as the ones you used to generate the index (see the sketch at the end of this post). Other fast hashing schemes often have poor topmost-bit distribution, so keep that in mind.

(c) They will have different false-negative rates, so comparing their speeds isn't really appropriate without knowing those rates. As pointed out in an earlier post, just ignoring the possibility is the fastest of all, but has the highest possible false-negative rate.
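
A minimal sketch of (b), assuming a 64-bit Zobrist key and a power-of-two table (the sizes and names are illustrative):

#include <cstdint>

constexpr uint64_t NUM_ENTRIES = 1ULL << 22;  // power-of-two table size (illustrative)

// The low bits choose the slot; the untouched high 32 bits serve as the
// signature stored in (and checked against) the entry.
inline uint64_t tt_index(uint64_t zobrist)     { return zobrist & (NUM_ENTRIES - 1); }
inline uint32_t tt_signature(uint64_t zobrist) { return uint32_t(zobrist >> 32); }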