Q-search is not negligible. It does real work, from calling Evaluate() to generating capture moves, and so on. Even the procedure-call overhead of setting up the call instruction is not free...
One can even eliminate the evaluate() call altogether if we just use material only, as in futility pruning. Also, move generation is not called before evaluation: we test for standing pat first, through the eval() call.
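A minimal sketch of that ordering, assuming a hypothetical engine interface (POSITION, MaterialBalance(), GenerateCaptures(), MakeMove(), UnmakeMove() are all invented names, not from any particular engine): the stand-pat test uses material only and runs before any move generation, so a fail-high node pays for neither a full evaluation nor a capture generation.

    #define MAX_MOVES 256

    /* hypothetical engine interface, for illustration only */
    typedef struct position POSITION;
    typedef int MOVE;
    extern int  MaterialBalance(const POSITION *pos);
    extern int  GenerateCaptures(const POSITION *pos, MOVE *moves);
    extern void MakeMove(POSITION *pos, MOVE move);
    extern void UnmakeMove(POSITION *pos, MOVE move);

    int Quiesce(POSITION *pos, int alpha, int beta) {
        int stand_pat = MaterialBalance(pos);   /* material only: no evaluate() */
        if (stand_pat >= beta)
            return stand_pat;                   /* fail high before any movegen */
        if (stand_pat > alpha)
            alpha = stand_pat;

        MOVE moves[MAX_MOVES];                  /* captures generated only now */
        int n = GenerateCaptures(pos, moves);
        for (int i = 0; i < n; i++) {
            MakeMove(pos, moves[i]);
            int score = -Quiesce(pos, -beta, -alpha);
            UnmakeMove(pos, moves[i]);
            if (score >= beta)
                return score;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }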
When I converted from recursive to iterative search, I didn't get any speedup, despite my "misinformed" expectations.
Maybe setting up the stack frame in a recursive searcher can be significant, but I doubt it.
As far as "delta pruning" + lazy eval, I have always done both, until the MVV/LVA change last year. And both helped. Significantly.
It may be because one is used weakly (not aggressively enough). So what I propose is an unsophisticated lazy eval, one using material only, done outside eval, with the same margin as futility. Hence both search exactly the same tree, except that futility does it earlier to save some time. How significant will that be? If two prunings that do not complement each other (fail-high reductions & null move) are used together and give a better result, it implies one of them is not done aggressively enough.
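To make that concrete, here is a hedged sketch (the margin value and function names are invented): with one shared margin, the futility test in the parent and the lazy-eval exit in the child cut exactly the same moves, because negamax flips signs and bounds, turning the parent's alpha test into a beta test in the child.

    #define MARGIN 200  /* one shared margin, so both tests prune the same moves */

    /* Futility, tested in the parent before MakeMove() and the recursive call:
       even winning the captured piece plus the margin cannot reach alpha. */
    int FutileCapture(int material, int capture_value, int alpha) {
        return material + capture_value + MARGIN <= alpha;
    }

    /* The same cut as a lazy-eval stand pat in the child. After the capture,
       child_material == -(material + capture_value) and the child's beta is
       -alpha, so this condition is identical to the one above. */
    int LazyStandPat(int child_material, int child_beta) {
        return child_material - MARGIN >= child_beta;
    }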
I don't agree with your last statement. Pruning vs. reduction is night and day. With pruning, you have no chance to discover that the move is actually good. With reductions, you do have a chance, albeit with a reduced-depth search. But there is hope.
They do complement each other in ways. For example, I ran a test a good while back to compare futility (last 4 plies in Crafty) vs. reductions. They seemed to add about equal Elo, taken one at a time. But both together added about 1.5x what either one added alone, sort of like the null-move vs. reductions test I published here a couple of years ago. That suggests they do have some overlap (since the pair is not 2x as good as either one individually), but it also suggests there is a complementary component, since adding either on top of the other still gains.
I did a quick test with a big margin and it did badly. When I investigated why, it seems the margin should be lower (not higher!) for a safer version. What was happening was that, with a margin of 1000, all moves started failing high and the qsearch was almost never done...
So a better version uses a factored (reduced) SEE score, something like the following.
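One illustrative form that idea could take (the 2/3 factor, the margin, and the names here are invented, not measured values): credit only part of the SEE gain when testing against alpha, instead of inflating the margin.

    /* illustrative sketch: damp the SEE score with a factor instead of
       using an oversized margin (factor and margin are invented) */
    #define DELTA_MARGIN 150
    #define SEE_NUM 2
    #define SEE_DEN 3

    /* prune the capture in qsearch if even the reduced SEE gain plus a
       modest margin cannot lift the score above alpha */
    int DeltaPrune(int stand_pat, int see_score, int alpha) {
        return stand_pat + (see_score * SEE_NUM) / SEE_DEN + DELTA_MARGIN <= alpha;
    }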
Maybe I brought up a bad example, off the top of my head. If we change it to null move _reductions_ (I read somewhere that there is such a thing) vs. fail-high reductions, then that would be redundant, I think. The null-move reduction is obviously more accurate with its reduced-depth search, so I can't say there is a complete overlap, but there is definitely redundancy.
However, in the case of futility vs. lazy eval, both in qsearch, one can set the two of them to produce the same tree.
I agree. The issue is one of performance. If you can avoid making the move and then recursively calling Quiesce(), you save some effort. Which is the kind of thing most optimizations are about. Not huge. But they add up.
Lazy evaluation _can_ be done more accurately, however. Bruce Moreland (Ferret) did a very exact one, where he knew exactly how much of a bonus or penalty each piece could contribute. So he could accurately determine whether there was "no hope" of getting above alpha, or below beta.
In Crafty, I have more than one lazy exit. The last one gives the biggest savings because it cuts out the piece evaluations, which have loops and such...
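A hedged sketch of that kind of exact lazy exit (the 300 bound and the helper names are all invented for illustration): once the cheap terms are summed, if even the largest swing the remaining terms could possibly add cannot bring the score back inside the window, the expensive piece loops are skipped.

    /* hypothetical interface and bound, for illustration only */
    typedef struct position POSITION;
    #define MAX_POSITIONAL 300   /* largest swing the unevaluated terms can
                                    add, derived from the eval's own tables */
    extern int MaterialBalance(const POSITION *pos);
    extern int PawnScore(const POSITION *pos);
    extern int PieceScore(const POSITION *pos);  /* per-piece loops: the big cost */

    int Evaluate(POSITION *pos, int alpha, int beta) {
        int score = MaterialBalance(pos);        /* cheapest term first */

        /* the true score lies within MAX_POSITIONAL of 'score', so if even
           that cannot reach alpha, or fall back below beta, stop here */
        if (score + MAX_POSITIONAL <= alpha)
            return score + MAX_POSITIONAL;       /* safe upper bound, still <= alpha */
        if (score - MAX_POSITIONAL >= beta)
            return score - MAX_POSITIONAL;       /* safe lower bound, still >= beta */

        score += PawnScore(pos);
        /* a second, tighter lazy exit could sit here, as in Crafty */
        return score + PieceScore(pos);
    }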