Null-move pruning and the hash table

Re: Null-move pruning and the hash table
Post by bob:

bob wrote:
Since you initially probe with a depth of DEPTH, you _must_ store draft=DEPTH so that you avoid the work the next time around... Any other value of draft will hurt. Yes, storing DEPTH-R will make your program faster, because you will be trusting that result when you should not, since you will be probing with a value of "DEPTH" and that will most certainly be >= DEPTH-R more frequently than it will be >= DEPTH. But it would be wrong in most of those cases.

Greg Strong wrote:
Unless I'm missing something (which I probably am), you've got it backwards. He's storing the hash table entry with a lower draft than he should, and thus he wouldn't be "trusting that result when you should not"; he would be distrusting it when he should. If the depth is 9, and he stores the bound with depth - R (let's say 6), then when the position is encountered again and the depth is 7, it will be untrusted and the null move played again...
I suppose there is a chance that this is actually good, because playing it again will update the hash table entries, which might help in the long run (although it seems unlikely).

My understanding was he _changed_ to storing depth-R, which is wrong. You want to store depth. He changed back to depth but claimed it slowed him down. Which it should, because storing depth-R will make hash probes succeed more often than they should, giving bogus results...
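For readers following the draft arithmetic, here is a minimal sketch in C of the probe test the disagreement turns on. The entry layout and all names are illustrative, not taken from any engine discussed in this thread:

Code:
/* Illustrative transposition-table entry; field names are hypothetical. */
typedef struct {
    unsigned long long key;  /* Zobrist hash of the position */
    int draft;               /* depth the stored result is valid for */
    int bound;               /* lower/upper/exact bound flag */
    int score;
} TTEntry;

/* A stored result is trusted only if it was searched at least as deep as
 * the current request.  In the example above, depth 9 with R = 3 stores
 * draft = 6, so a later probe at depth 7 fails this test (6 < 7) and the
 * null-move search is repeated; storing draft = 9 would have satisfied it. */
int tt_usable(const TTEntry *e, unsigned long long key, int depth)
{
    return e->key == key && e->draft >= depth;
}

Whether the stored draft is depth or depth-R changes only what this comparison sees, which is the whole dispute.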
Re: Null-move pruning and the hash table
Post by Greg Strong:
bob wrote:
My understanding was he _changed_ to storing depth-R, which is wrong. You want to store depth. He changed back to depth but claimed it slowed him down. Which it should, because storing depth-R will make hash probes succeed more often than they should, giving bogus results...

Again, unless I'm missing something, storing depth-R means storing a LOWER depth, meaning that it has less value and will be used less often... So, it's the opposite of bogus results; it's really ignoring an otherwise good hash hit when the probe depth is between depth and depth-R.
Re: Null-move pruning and the hash table
Post by bob:
Greg Strong wrote:
Again, unless I'm missing something, storing depth-R means storing a LOWER depth, meaning that it has less value and will be used less often... So, it's the opposite of bogus results; it's really ignoring an otherwise good hash hit when the probe depth is between depth and depth-R.

You are exactly right; the two discussions reversed my thinking from what really happens. That storing depth-R gives better results must mean that he is not hashing correctly, so that a hash probe failure caused by the wrong depth being stored earlier is somehow better than a quick hash cutoff... How that could be broken is difficult to imagine.
Re: Null-move pruning and the hash table
Bob is very interested in things that may be worth 1 Elo.
Re: Null-move pruning and the hash table
Post by hgm (H G Muller):
It is tricky drawing conclusions like that without knowing the replacement algorithm. If the result is genuine, it could be due to the hashed info being comparatively useless (it will not give you a move, and it saves you much less work than an entry that was obtained from searching real moves, as the null-move search would be reduced when you had to redo it).
So it could be that storing it at d=DEPTH floods the hash table with null-move cutoffs, replacing d=DEPTH-1 ... DEPTH-R entries, which represented much more search effort. Artificially lowering the depth would make the position much less successful in conquering and hanging on to hash table entries.
This would of course be solved still more efficiently by taking into account, in the replacement decision, the fact that the cutoff was caused by a null move, and then storing with the true depth, so that you won't needlessly disregard any hits on it. To be free of replacement effects, and to purely measure the effect of storing too low a depth on the hit rate, one should really test this with an always-replace hash table.
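A minimal sketch of the replacement rule suggested here, assuming a single depth-preferred slot and a per-entry flag; all identifiers are hypothetical:

Code:
/* Hypothetical entry with a flag recording that the stored bound came
 * from a null-move cutoff.  The draft stays the true depth, so probes
 * still get full credit; only eviction is biased against the entry. */
typedef struct {
    unsigned long long key;
    int draft;       /* true search depth */
    int from_null;   /* 1 if the cutoff was produced by a null move */
    int bound, score;
} TTEntry;

enum { NULL_MOVE_PENALTY = 2 };  /* tuning constant, purely illustrative */

static int keep_value(const TTEntry *e)
{
    return e->draft - (e->from_null ? NULL_MOVE_PENALTY : 0);
}

/* Depth-preferred store: the incoming entry wins only if it represents
 * at least as much (discounted) search effort as the occupant. */
void tt_replace(TTEntry *slot, const TTEntry *fresh)
{
    if (keep_value(fresh) >= keep_value(slot))
        *slot = *fresh;
}

The probe path is untouched: an entry stored this way is still trusted at its full depth, it is merely easier to evict.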
Re: Null-move pruning and the hash table
Post by micron:
I gave up hashing with d = depth - R, because it is illogical.
I also gave up hashing with d = depth, because it invariably puts my node count up. Here's an extreme, but otherwise typical, example.
Code:
hash_null_move_cutoffs = NO

Hash table statistics:
  3.5E+6 probes
  1.7E+6 hits (score used)   48%
  8.3E+5 hits (move tried)   23%
  9.8E+5 misses              28%
Null-move pruning:
  3.3E+5 null-moves
  3.0E+5 cutoffs             90%

hash_null_move_cutoffs = YES

Hash table statistics:
  1.5E+7 probes
  7.7E+6 hits (score used)   50%
  2.9E+6 hits (move tried)   18%
  4.7E+6 misses              30%
Null-move pruning:
  9.0E+5 null-moves
  8.2E+5 cutoffs             91%
Such results are not much affected by switching to an always-replace TT policy.
To conclude, in my program the TT seems OK, NMP seems OK, but hashing null-move cutoffs wastes time, for reasons that remain obscure, instead of saving it.
Robert P.
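The switch above presumably guards a store at the point where the reduced null-move search fails high, along these lines; this is a sketch, since the actual code is not shown in the thread, and every identifier here is made up:

Code:
/* Inside the search function, after making the null move.
 * Negamax convention: the reduced null-move search is probed with a
 * zero-width window around beta. */
int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
if (score >= beta) {
    if (hash_null_move_cutoffs)   /* the setting being toggled above */
        tt_store(pos_key, depth, BOUND_LOWER, score, NO_MOVE);
    return score;                 /* the cutoff itself happens either way */
}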
Re: Null-move pruning and the hash table
Post by hgm:
micron wrote:
To conclude, in my program the TT seems OK, NMP seems OK, but hashing null-move cutoffs wastes time, for reasons that remain obscure, instead of saving it.

Interesting. Of course a hash-table hit telling you that the null move will fail high saves you nowhere near as much search effort as a hit telling you another move failed high, because of the reduction, and because you would try the null move first anyway. Even in the always-replace case, storing null-move fail highs would tend to replace many much more valuable non-null fail highs, where it really would have helped to know which move caused the cutoff, even if the depth was not sufficient. And most cutoffs in the tree are actually caused by null moves.
Furthermore, if you get a hash miss on a position that will fail high by null move, the null move will be the first thing you try. And when the position was already searched, but just not stored, the node the null move leads to will also have been searched, and it will have been stored. So you might get a hash hit on that, limiting the damage of the initial miss to only one node. Meanwhile, a hit on the entry you avoided overwriting might save you hundreds of nodes later.
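In skeleton form, this damage-limitation argument looks roughly like the following. A sketch only: all names are hypothetical, and the base case and the legality conditions for the null move are omitted:

Code:
typedef struct Position Position;   /* opaque for this sketch */
typedef struct { unsigned long long key; int draft, bound, score; } TTEntry;
enum { BOUND_LOWER = 1, R = 3 };

TTEntry *tt_probe(unsigned long long key);
unsigned long long pos_key(const Position *pos);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);

int search(Position *pos, int depth, int alpha, int beta)
{
    /* 1. The probe whose hit we may have sacrificed by not storing. */
    TTEntry *e = tt_probe(pos_key(pos));
    if (e && e->draft >= depth && e->bound == BOUND_LOWER && e->score >= beta)
        return e->score;

    /* 2. On a miss, the null move is the first thing tried anyway.  The
     * position after the null move was searched (and stored) last time
     * too, so this recursive call usually returns on an immediate hash
     * hit: the miss costs roughly one node, whereas an overwritten
     * non-null entry could have saved hundreds later. */
    make_null_move(pos);
    int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
    unmake_null_move(pos);
    if (score >= beta)
        return score;

    /* 3. ... regular move loop updating alpha follows ... */
    return alpha;
}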
Re: Null-move pruning and the hash table
Post by bob:
micron wrote:
I gave up hashing with d = depth - R, because it is illogical. I also gave up hashing with d = depth, because it invariably puts my node count up. [statistics snipped] Such results are not much affected by switching to an always-replace TT policy. To conclude, in my program the TT seems OK, NMP seems OK, but hashing null-move cutoffs wastes time, for reasons that remain obscure, instead of saving it.

What it means is that you have something broken. If you reach position P, and do a null-move search, and it fails high, you should store it as a fail high with draft=depth. If you reach position P elsewhere in the search, a hash hit should produce that same fail high, with _no_ search at all. That has to be faster. If it slows you down, you have other problems that need to be fixed...