For the purpose of this discussion, I am going to ignore various extensions and prunings for now, as well as things like LMR. This discussion focuses on searches that are fundamentally constrained by depth (which is what pretty much everyone does right now).

Given a root position, there's a theoretical game tree that includes all possible variations starting at this root position, running all the way to the end of the game. This theoretical tree contains one or more principal variations (PVs). When we do a search, we are essentially trying to see as much of this theoretical PV as possible.

We do that by constructing a search tree within this huge theoretical game tree, hoping that it includes as much of the theoretical PV as possible.

But which part of the giant theoretical game tree do we search? Well, it makes sense to search nodes that have the highest probability of being in the PV.

The root node has the highest probability (exactly 1), then the nodes one ply from the root, and so on. With no further information about the nodes, we assume that each child of a parent has the same probability of being in the PV, given that the parent is in the PV.

Since the tree obviously has a branching factor greater than 1, nodes closer to the root will, in general, have a higher probability of being in the theoretical PV.

That's why, in depth-constrained searches, we look at all nodes fewer than a certain number of plies from the root position (for each iteration of ID).
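For concreteness, here's a minimal sketch of what I mean by a plain depth-constrained search driven by iterative deepening. It runs on a toy game tree where a node is a `(static_eval, children)` pair with scores from the side to move's perspective; the representation and all names are just illustration, not engine code:

```python
def search(node, depth):
    """Full-width negamax to a fixed depth on a toy (static_eval, children) tree."""
    score, children = node
    if depth == 0 or not children:
        return score  # stand-in for a static eval / qsearch
    # Every child is searched to the same remaining depth, regardless of
    # how many siblings it has -- this is the policy questioned below.
    return max(-search(child, depth - 1) for child in children)

def iterative_deepening(root, max_depth):
    """Re-search the root at increasing depth limits, as ID does."""
    score = None
    for depth in range(1, max_depth + 1):
        score = search(root, depth)
    return score
```

Every node fewer than `depth` plies from the root gets visited, which is exactly the "distance from root" criterion described above.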

That seems to make sense, but consider the situation where there is one node with 2 children. Child A is a much simpler position, with only 3 possible moves. Child B is a more complex position, with 9 possible moves.

With a depth-constrained search, we would be spending much more time on B than A. But does that really make sense?

Does it really make sense to search each of B's 9 children to the same depth as A's 3 children?

If we assume the probability of A and B being in the theoretical PV to be equal, the probability of each of B's children being in the PV is much lower than A's children, just because B has many more children. Wouldn't it make sense to search A's children deeper?
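Here are the worked numbers for that example, under the uniform-children assumption above (all probabilities conditional on the grandparent being in the PV):

```python
# The parent splits its probability evenly between its 2 children,
# and each child splits its share evenly among its own moves.
p_parent = 1.0
p_A = p_B = p_parent / 2      # the parent has 2 children
p_child_of_A = p_A / 3        # A has 3 possible moves -> 1/6
p_child_of_B = p_B / 9        # B has 9 possible moves -> 1/18
```

So each of A's children is three times as likely to be on the theoretical PV as each of B's children, even though A and B themselves are equally likely.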

That is, unless we want to assume that B has a higher probability of being in the PV simply because it has more children, and I don't see how that could be argued.

Instead of depth-constrained searches, how about we do node-count constrained searches instead?

```
search(position, nodeCount):
    if (nodeCount < 1)
        return qsearch(position)
    --nodeCount
    ...
    movelist = position.generateMoves()
    for each move in movelist:
        position.apply(move)
        score = -search(position, nodeCount / movelist.size)
        position.undo(move)
        ...
    ...
```
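Here's a runnable version of that pseudocode in Python, on the same toy tree shape as before (a node is a `(static_eval, children)` pair); `qsearch` is stubbed out with the static eval, and everything here is illustrative rather than engine code:

```python
def qsearch(node):
    return node[0]  # stand-in for a real quiescence search

def search(node, node_count):
    """Node-count-constrained negamax: the budget, not a depth, is the limit."""
    _, children = node
    if node_count < 1 or not children:
        return qsearch(node)
    # Split the remaining budget evenly among the children: wide positions
    # give each child a small budget, narrow positions a large one.
    budget = (node_count - 1) // len(children)
    return max(-search(child, budget) for child in children)
```

With a budget of 10 at the root of the A/B example, each side first gets 4; A's three children then get a budget of 1 each, while B's nine children get 0 each and drop straight into qsearch, which is exactly the asymmetry argued for above.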

What do schemes like this have in common? They force the search to spend more time on positions with fewer legal moves (or fewer reasonable moves).

In fact, I believe this is the most important effect of LMR: it makes the search spend much less time on positions with many moves than it otherwise would, relative to positions with fewer reasonable moves.

This is one of the ideas I'm exploring with my Master's thesis, and I was just wondering what you guys think?

Note that I described the algorithm above in terms of nodes because I feel it's easier to understand that way. However, it can also be applied in a depth-based framework. The only thing that needs to change is the default depth reduction: instead of reducing depth by 1 at every ply, reduce it by an amount based on how many legal moves the position has. But then "depth" is pretty much meaningless at that point, and it would be easier to just throw it out.
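To make that concrete, here is one way the depth-based variant could look, on the same toy `(static_eval, children)` tree. This is just my interpretation; the log2 reduction and the floor of 1 are arbitrary illustrative choices, not tuned values:

```python
import math

def reduction(num_moves):
    # Burn depth faster in wide positions; never reduce by less than 1 ply.
    return max(1.0, math.log2(num_moves))

def search(node, depth):
    """Negamax where each ply costs more depth the wider the position is."""
    score, children = node
    if depth <= 0 or not children:
        return score  # stand-in for static eval / qsearch
    r = reduction(len(children))
    return max(-search(child, depth - r) for child in children)
```

With log2, a position with 2 moves costs the usual 1 "ply" of depth, while one with 32 moves costs 5, so narrow lines automatically get searched deeper, just as in the node-count formulation.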