matthewlai wrote:
bob wrote:
I think that a simple example from the past shows why this doesn't work very well. At one point Harry Nelson noticed that in Cray Blitz, the node count for each root node (total sub-tree, not just number of siblings) contained useful information. For example, you have a best move M, but once you reach the final iteration move P will replace it as the best move. Harry noticed that for each successive iteration, the node count for the sub-tree below move P increased more than the node count for other moves (> than the expected value based on branching factor). So I wrote code to sort the move list after each iteration based on this. And it was an improvement.
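The heuristic bob describes could be sketched roughly like this (names are illustrative; Cray Blitz's actual implementation obviously differed):

```python
# Sketch of node-count-based root move ordering, as described above.
# After each iteration, sort the root moves by the size of the sub-tree
# each one produced: a move whose sub-tree grows faster than the
# branching factor predicts is a candidate to become the new best move.

def order_root_moves(root_moves, subtree_nodes):
    """Sort root moves by descending sub-tree node count.

    root_moves    -- list of moves at the root
    subtree_nodes -- dict mapping move -> nodes searched below it
                     in the iteration that just finished
    """
    return sorted(root_moves, key=lambda m: subtree_nodes.get(m, 0),
                  reverse=True)

# Example: move "P" produced the biggest sub-tree, so search it first.
moves = ["M", "P", "Q"]
counts = {"M": 120_000, "P": 450_000, "Q": 30_000}
print(order_root_moves(moves, counts))  # ['P', 'M', 'Q']
```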
That sounds like a useful heuristic. I'm not surprised it works. When a child has a larger sub-tree, that usually means something interesting is happening there (PV changed), so ordering moves based on that makes sense.
I don't really see how that's related to the discussion here, though.
Just because a move has fewer siblings doesn't say a thing about how important each of those siblings might be. I can't see any reasonable justification for searching each of 3 siblings more deeply than you do when a position has 9 siblings, other than the position being somehow more forced. Singular extension is an idea that has merit, but it doesn't extend the search on a move just because there is one move; it extends when there is one good move and the rest are bad, regardless of whether there are 2 others or 20 others to choose from. The positions with the fewest moves are already generally searched more deeply, because such positions have the side on move in check. Positions with few GOOD moves are much more difficult to recognize, however. And when you throw in pseudo-legal moves, this becomes even more problematic.
This is not saying the idea won't work. But evidence suggests that the only way to recognize things that need to be extended is by searching them, not by some static criterion such as "< N siblings vs > N siblings".
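For reference, the "one good move, the rest bad" test bob describes is indeed usually done with a search rather than a static rule. A simplified sketch (the margin, names, and `search`/`pos` interfaces are illustrative, not any particular engine's code):

```python
# Simplified singular-extension test: the TT move is "singular" if every
# other move fails low against a lowered beta in a reduced-depth search.
# `search(child, alpha, beta, depth)` is assumed to be a fail-soft
# alpha-beta search returning a score from the child's point of view.

SINGULAR_MARGIN = 50  # illustrative margin, in centipawns

def is_singular(pos, tt_move, tt_score, depth, search):
    s_beta = tt_score - SINGULAR_MARGIN
    for move in pos.legal_moves():
        if move == tt_move:
            continue
        # Reduced-depth verification search on the sibling.
        score = -search(pos.play(move), -s_beta, -s_beta + 1, depth // 2)
        if score >= s_beta:
            return False  # a second good move exists; not singular
    return True  # only the TT move reaches s_beta: extend it
```

The count of siblings never enters into it; what matters is whether any sibling comes close to the best move's score.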
Pruning, reductions, and extensions can still be done in a node-count-based approach, by not allocating nodes equally. That's no harder or easier than deciding what to reduce and extend in a depth-based framework, since all these "extension" and "reduction/pruning" decisions still need to be made statically before searching (TT, killers, SEE, etc.). Or they can be made after a first pass, and that can be done in a node-count-based approach as well (re-searches, etc.).
So here we are talking about "default" depth reductions (always 1 in most modern engines). This is our starting point for how deeply to search each move, before any heuristic is applied. If we have other information (TT, killers, SEE, history, and whatnot), we can tweak each individual depth/node count using that.
If we have no prior information about the moves, then the more moves there are, the lower the chance that any particular one of them is the best.
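The two paragraphs above could be sketched as a budget split: start uniform, then reweight with whatever static hints exist. The weights below are made up purely for illustration:

```python
# Sketch of node-count-based effort allocation: with no prior
# information, each of N moves is equally likely to be best, so the
# default is a uniform 1/N split of the node budget. Static hints
# (TT move, killers, history, ...) then scale individual shares.

def allocate_nodes(moves, budget, weight):
    """Split `budget` nodes among `moves` in proportion to weight(move).

    With weight(m) == 1 for every move this degenerates to the uniform
    1/N split that plays the role of the default depth-minus-one step.
    """
    total = sum(weight(m) for m in moves)
    return {m: budget * weight(m) // total for m in moves}

# Example: give the TT move 4x the default share (a made-up factor).
moves = ["tt_move", "killer", "quiet1", "quiet2"]
w = lambda m: 4 if m == "tt_move" else 1
print(allocate_nodes(moves, 700_000, w))
```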
Checking is not the only way to drastically reduce the number of moves. Trading queens can drastically reduce the number of moves as well, for example.
Say we have a root position with 2 legal moves - trade queens or don't.
Using a uniform-depth approach, we would spend much more time on the non-trading sub-tree, even though, absent any other information, we don't know that line is any more likely to be the better one.
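To put rough numbers on that (branching factors made up for illustration): at fixed depth, each sub-tree's node count grows like b**d, so the bushier position soaks up nearly all of the time.

```python
# At a fixed depth d, sub-tree size grows roughly as b**d, so the
# non-trading line dominates the effort. Branching factors are made up.

d = 6
keep_queens = 35 ** d   # non-trading line, bushy middlegame
trade_queens = 15 ** d  # after the queen trade, far fewer moves

# The trading sub-tree gets well under 1% of the nodes at equal depth.
print(trade_queens / keep_queens)
```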
This would also apply to open vs closed games, for example. If you have the option of keeping the game closed vs trading something to open it up, does it make sense to spend much more time on the sub-tree where the position opens up, even though keeping it closed is equally likely to be the better option?
Pseudo-legal moves would make it problematic, yes, but that's just an implementation detail. I switched to legal move generation in my engine because when your evaluator is a huge neural network that brings your NPS down to 40k, the legal move generation overhead is negligible.