I thought I might give an update here on what I tried and what stuck.
First I tried the idea of skipping moves that are already hopelessly below alpha instead of searching them further. Close to the leaves it worked well, and with increasing margins (e.g. futility_margin = 70 * depth) it also worked at higher depths. I wrote a version that kept track of false positives (cases where my pruning would have skipped the move, but the move would actually have raised alpha) and realized that if I don't apply this pruning when either side's king is in check, I can reduce the rate of false positives to well below one percent. At that point it provided a nice speed increase that more than compensated for the Elo lost to reduced accuracy. I gained about 40 Elo.
Code:
bool tactical = position.IsChecked(position.SideToMove) || child.IsChecked(child.SideToMove);
//some moves are hopeless and can be skipped without deeper evaluation
if (depth <= 4 && !tactical)
{
    int futilityMargin = (int)color * depth * MAX_GAIN_PER_PLY;
    if (window.FailLow(child.Score + futilityMargin, color))
        continue;
}
This was a first win. But by this time the cost of evaluating a position was becoming a hot path in the overall performance of the program. I was already updating the Zobrist hash (used for the transposition table) incrementally, so it didn't take much extra work to also update the evaluation incrementally. After all, it's just based on PSTs. Implementing that didn't hurt the simplicity of the program but made it a little faster. (+10 Elo)
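To illustrate the idea of an incrementally updated PST evaluation (not the actual MinimalChess code, which is C#), here is a minimal Python sketch with a made-up toy table, a single piece type, and quiet moves only:

```python
# Toy piece-square table: white pawns only, indexed by square 0..63,
# one value per rank. Real engines have one table per piece type/color.
PST = {
    'P': [0]*8 + [5]*8 + [10]*8 + [20]*8 + [30]*8 + [40]*8 + [50]*8 + [0]*8,
}

def full_eval(board):
    """Slow reference: sum PST values over all pieces on the board."""
    return sum(PST[piece][sq] for sq, piece in board.items())

def make_move(board, score, frm, to):
    """Apply a quiet move and update the running score incrementally:
    two table lookups instead of a full board scan."""
    piece = board.pop(frm)
    board[to] = piece
    return score + PST[piece][to] - PST[piece][frm]
```

Captures and promotions additionally need the captured or changed piece's PST value subtracted or swapped, but the principle stays the same: the evaluation is carried along with the position and adjusted per move.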
Encouraged by the fact that pruning a few moves with the potential to raise alpha apparently didn't hurt playing strength too much, I looked into the portion of my code that does the PVS search. The idea is that every move after the first is assumed to probably fail low and is therefore searched with a tiny window around alpha.
Code:
int score = EvalPositionTT(child, depth - 1, window.GetLowerBound(color));
if (window.FailLow(score, color))
continue;
This was the original version, which is really a lossless optimization. But now I tried evaluating the child positions at reduced depth. Again I looked into the nature of the false positives (positive = you can safely prune), and it turns out that it makes sense to also treat moves involving check as tactical here. Then I realized that I was in the process of reinventing late move reductions: I was already exempting the PV move, so I experimented with exempting a few more moves before starting to reduce. If the reduced search around alpha doesn't fail low, the move is searched at full depth just like before.
Currently the code looks like this:
Code:
if (depth >= 2)
{
    //non-tactical late moves are searched at a reduced depth to make this test even faster!
    int R = (tactical || expandedNodes < 4) ? 0 : 2;
    int score = EvalPositionTT(child, depth - R - 1, window.GetLowerBound(color));
    if (window.FailLow(score, color))
        continue;
}
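For context, the surrounding flow (null-window probe first, then a full-depth, full-window re-search whenever the probe does not fail low) can be sketched in Python over a toy game tree. The tree representation and all names here are my own invention, not the engine's:

```python
INF = 10**9

# A toy "position" is (static_value, is_tactical, children); values are
# always from the side to move's perspective, negamax style.
def search(node, depth, alpha, beta):
    value, tactical, children = node
    if depth == 0 or not children:
        return value
    best = -INF
    for i, child in enumerate(children):
        if i > 0 and depth >= 2:
            # Probe late moves with a null window; non-tactical late moves
            # are additionally reduced by 2 plies, as in the snippet above.
            R = 0 if (child[1] or i < 4) else 2
            score = -search(child, depth - R - 1, -alpha - 1, -alpha)
            if score <= alpha:
                continue  # probe failed low: skip the full search
        # First move, or a probe that didn't fail low: full window, full depth.
        score = -search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best
```

On a real board the `make`/`unmake` machinery, quiescence search, and transposition table replace the static tree, but the control flow is the same shape.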
This reduced the branching factor massively, but when I searched a test suite of positions to the same depth I solved quite a few fewer puzzles. So the increased depth is paid for with a significant loss of accuracy. Still, it was worth about 30 Elo, and if I limit the search by time instead of depth it even solves more puzzles.
And now that I basically had LMR implemented, I remembered reading on this forum that there is a synergy between good move ordering and late move reductions. Up to this point I wasn't sorting quiet moves at all, so I implemented a simple history heuristic.
When a move fails to raise alpha I record it as 'bad'; when it raises beta I record it as 'good'. And I use the ratio of the two for sorting the quiet moves in a naive fashion (one line of code, making use of the built-in Sort functionality of C# lists). Even though the NPS of my engine suffered, the branching factor improved enough to give me another +50 Elo. Without late move reductions such a brute-force approach to sorting the quiet moves didn't improve the search more than it cost in terms of performance. That features can't be measured in isolation and are only effective in combination with each other makes engine development just a little more tricky! (And it drives up the time required for proper testing.)
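In case it helps, that kind of ratio-based ordering might look roughly like this. This is a Python sketch with counter names and from/to-square indexing that I made up for illustration; the engine itself does the sorting with a one-line C# List Sort call:

```python
from collections import defaultdict

# Hypothetical history counters, indexed by (from_square, to_square).
good = defaultdict(int)  # move raised beta (caused a cutoff)
bad = defaultdict(int)   # move failed to raise alpha

def record(move, caused_cutoff):
    """Update the history counters after a quiet move was searched."""
    if caused_cutoff:
        good[move] += 1
    else:
        bad[move] += 1

def history_score(move):
    # Ratio of cutoffs to all recorded outcomes; +1 avoids division by zero
    # and keeps never-seen moves at score 0.
    return good[move] / (good[move] + bad[move] + 1)

def sort_quiets(moves):
    # Best-scoring quiet moves first; the one-line equivalent of a
    # comparison-based Sort on a list of moves.
    return sorted(moves, key=history_score, reverse=True)
```

The point of the ratio (rather than a raw cutoff count) is that a move tried often but rarely useful sorts below a move that cuts off most of the times it is tried.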
Before I made the changes described above I had no search reductions or extensions besides null move pruning. Now the engine is pruning and reducing quite aggressively, and I wonder how that changes its playstyle. (Personally, I can't judge it. If there's another Zatour, let me know, Amanj! I always very much appreciated your qualitative reviews of my engine's play.) But it definitely made the engine stronger in engine vs engine matches. I think the current dev version of Minimal Chess is probably playing at around 2500 CCRL Elo.