> But the deeper threads will not just be writing 800 moves. They will be constantly writing. So the probability that a valid high-depth entry gets overwritten is higher than what you imply.

Not really. Let's say you have 16 threads, each searching 50 moves, which seems unlikely, but let's take it as a worst case for the sake of argument. That's a total of 800 moves to store in your "currently-searching" hash table. If the table has 2^20 entries, it will be only about 0.08% full, and the odds of a hash collision when inserting a new move are roughly 1 in 1300. So it doesn't seem like something you'd have to worry about too much. But if you are worried about collisions, it would be trivial to change the hash table code so that nothing ever gets overwritten; I think that would add 10-15 lines to the pseudocode I've posted. Again, though, I don't think this is a problem that needs to be solved. The worst thing that happens on a collision is that multiple threads search the same move at the same time, which just means that the move gets searched faster.
This is easily fixable with linear probing: don't overwrite, just scan forward for a free entry. Since the load factor of the hash table is so low, linear probing adds almost no overhead.
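A minimal sketch of that idea, in C. The entry layout and names (`CsEntry`, `cs_insert`, `cs_remove`) are hypothetical, synchronization is ignored (a real multi-threaded engine would use atomic compare-and-swap on the key), and the probe count is capped because, per the thread, failing to record a move is harmless:

```c
#include <stdint.h>

#define TABLE_BITS 20
#define TABLE_SIZE (1u << TABLE_BITS)
#define MAX_PROBES 8

/* Hypothetical "currently-searching" entry; key 0 marks an empty slot. */
typedef struct {
    uint64_t key;    /* e.g. position hash combined with the move */
    int      depth;
} CsEntry;

static CsEntry cs_table[TABLE_SIZE];

/* Insert with linear probing: never overwrite a live entry, just walk
   forward to the next free slot.  At ~0.08% load the probe sequence is
   almost always length 1.  Returns 1 if the move was recorded, 0 if it
   is already present or no free slot was found within MAX_PROBES. */
static int cs_insert(uint64_t key, int depth)
{
    uint32_t idx = (uint32_t)(key & (TABLE_SIZE - 1));
    for (int probes = 0; probes < MAX_PROBES; probes++) {
        CsEntry *e = &cs_table[idx];
        if (e->key == 0) {        /* free slot: claim it */
            e->key = key;
            e->depth = depth;
            return 1;
        }
        if (e->key == key)        /* another thread is already on this move */
            return 0;
        idx = (idx + 1) & (TABLE_SIZE - 1);
    }
    return 0;  /* give up; worst case, two threads search the same move */
}

/* Remove by clearing the slot.  Strict linear probing would need
   tombstones to keep probe chains intact, but at this load factor a
   broken chain only means a harmless duplicate search. */
static void cs_remove(uint64_t key)
{
    uint32_t idx = (uint32_t)(key & (TABLE_SIZE - 1));
    for (int probes = 0; probes < MAX_PROBES; probes++) {
        if (cs_table[idx].key == key) {
            cs_table[idx].key = 0;
            return;
        }
        idx = (idx + 1) & (TABLE_SIZE - 1);
    }
}
```

Since nothing is ever overwritten, a colliding insert simply lands one slot over, and the failure mode stays the benign one described above.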