I have been testing null-move pruning (NMP) in Shen Yu. Currently, my implementation looks something like this:
Code: Select all
if !in_check && !is_pv && eval >= beta && !IS_ROOT && !self.board.is_kp() {
    self.board.make_nullmove();
    // Adaptive reduction: deeper searches get a larger reduction.
    let reduction = 3 + depth / 6;
    let score = if depth > reduction {
        let mut new_pvline = PVLine::new();
        // Null-window search around beta.
        self.alphabeta::<false>(
            depth - 1 - reduction,
            ply + 1,
            -beta,
            -beta + 1,
            &mut new_pvline,
        )
    } else {
        // Not enough depth left for a full search; drop into quiescence.
        self.quiesce(0, -beta, -beta + 1)
    };
    self.board.unmake_nullmove();
    if self.timer.stopped {
        return 0;
    }
    if score >= beta {
        return beta;
    }
}
Additional information:
I have not implemented any other forms of pruning besides SEE pruning in QSearch. Currently, my staged move generation works as follows:
1. Test the TT move for legality, then play it
2. Generate captures. Sort them using MVV-LVA, and use SEE to filter out "losing captures," which are tested in stage 4. Winning captures are played.
3. Test Killer moves for legality
4. Play all losing captures
5. Generate quiet moves, then sort them by the history heuristic.
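The five stages above can be sketched as a small state machine together with an MVV-LVA score. This is only an illustrative outline, not ShenYu's actual code: the `Stage` names, the `mvv_lva` function, and its piece values are all assumptions.

```rust
// Hypothetical sketch of the staged move generator described above.

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Stage {
    TtMove,          // 1. try the TT move first
    WinningCaptures, // 2. MVV-LVA-sorted captures that pass SEE
    Killers,         // 3. killer moves, checked for legality
    LosingCaptures,  // 4. captures that failed SEE
    Quiets,          // 5. history-sorted quiet moves
    Done,
}

impl Stage {
    fn next(self) -> Stage {
        match self {
            Stage::TtMove => Stage::WinningCaptures,
            Stage::WinningCaptures => Stage::Killers,
            Stage::Killers => Stage::LosingCaptures,
            Stage::LosingCaptures => Stage::Quiets,
            Stage::Quiets | Stage::Done => Stage::Done,
        }
    }
}

/// MVV-LVA: most valuable victim first, least valuable attacker as tiebreak.
/// Piece indices: 0 = pawn, 1 = knight, 2 = bishop, 3 = rook, 4 = queen, 5 = king.
/// The weights here are illustrative.
fn mvv_lva(victim: usize, attacker: usize) -> i32 {
    const VALUE: [i32; 6] = [100, 300, 300, 500, 900, 0];
    VALUE[victim] * 10 - VALUE[attacker]
}

fn main() {
    // The stage machine walks the five stages in order, then stops.
    let mut s = Stage::TtMove;
    while s != Stage::Done {
        println!("{:?}", s);
        s = s.next();
    }

    // PxQ is tried before QxQ, and QxQ before PxR.
    assert!(mvv_lva(4, 0) > mvv_lva(4, 4));
    assert!(mvv_lva(4, 4) > mvv_lva(3, 0));
}
```

The point of the enum is that the generator only produces (and sorts) moves for the current stage, so a beta cutoff on the TT move or a winning capture means quiet moves are never generated at all.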
Since switching to a make/unmake board representation and legal move generation, I have incurred a small speed hit: ShenYu 1.0.0 searches at about 5.4 MNps, whereas the newer version usually hits about 4.8 MNps, possibly due to an inefficient implementation. For now, this is not particularly problematic for me, because the code is cleaner, and my eventual goal is to switch to a (self-trained) NNUE for evaluation, so the small inefficiencies I am currently seeing are probably not too much to worry about.