Sorry, but this is not "theoretically correct". And somehow we are not talking toward the same point. At any node in the tree, you could reach that point with alpha = beta-1 or with alpha < beta-1, which flags this as non-PV or PV respectively. In one case (non-PV) you reduce/prune more aggressively than in the other, which makes little sense. Why are you searching this path in the first place? Hoping to find a better move? That's my intent. If you have a score of 0.0 for several iterations and then find a move that jumps to +1.0, do you stop searching all other moves completely and just search that one deeper and deeper to verify it won't come crashing down, or do you continue to search other moves hoping for a still better one? If the latter, how does pruning/reducing them more aggressively make any sense? It just makes it HARDER for said node to expose the tactics that require a deeper search to see.

Michel wrote:
> What I am trying to say is that the (potential) PV descendant node _is_ different, since you are now in a re-search (the scout search for the parent non-PV move failed high). So you may choose to do the re-search with less pruning to validate the fail high.

The PV descendant of a non-PV node is just the most plausible refutation of the previous move. The logic applied there is no different than the logic at the root. You may choose to treat the most plausible move differently from others.
What I am saying really is "theoretically correct" (see my first post in this thread)! Not pruning in PV nodes may compensate for more aggressive pruning in non-PV nodes through the PVS mechanism, in such a way that the tree value does not change.
Since re-searches take time, in reality it is a trade-off.
So, a more detailed recap. I am going to use the traditional "left-to-right" search order, which means the left-hand edge of the tree represents the first move searched at each ply, and each node on that edge is a PV node by Knuth/Moore's definition.
So, to start this off, let's search that left-hand edge, being very cautious about reductions and pruning, generally doing none, since this should be a critical path through the tree. Suppose we are going 20 plies deep. At ply=19 we make a move (left-hand edge, PV node) that takes us to ply=20. Since this is also a PV node we search it carefully. Now we back up to ply 19 and try another move, and when we get to ply=20 again, we are NOT in a PV node, and we reduce more aggressively. The question is, what makes this node any different from the first one at this ply? Are you CERTAIN the first move you searched was best and the move leading to this node is worse? Certain enough to not search it at all?
Once you back up to the root with a score for the first move, you are now apparently convinced it is the best move (something that is only true 4 out of 5 times or so). So you now search the rest of the moves at ply=1, allowing more aggressive reductions and pruning along those pathways than you allowed along the "best move" path. Is this the right way to discover that this is one of those 18% of the times where the first move is not best and you will change to something else?
So, we are at an interesting point. The first move is not best, but we don't know this until we search all the remaining moves. Yet we search them less rigorously than we searched the best move. Is this the way to discover a deep tactic, which is why we are doing this search in the first place? What if the remaining moves all fail low because they are searched less carefully, and we overlook something better? What if one of them fails high due to the aggressive reductions/pruning, but when we re-search it using the same guidelines we used for the original best move, it now fails low? Is it better than the original best move? Or do we throw it away?
LMR is something that makes sense at any node. The more moves you search without failing high, the more convinced you should become that no move will fail high and we are just wasting time searching them. We reduce 'em, reducing 'em more and more as we get farther into the move list, and we introduce error, since move ordering cannot be perfect. We lose Elo. But then we are able to search deeper because of these reductions, which gains Elo. The net effect is that the error is more than offset by the depth gain and we see a positive improvement in playing skill. But why does it make sense to reduce moves at some nodes more or less than similarly-ordered moves at other nodes? This is the question I have repeatedly asked, and I am not seeing an answer that addresses it. I want my search to behave the same way, and find the same best move, no matter whether the best move is ordered first or second. Yet this approach will not do that.
I firmly believe, until I see some convincing example or explanation, that treating moves differently based on their order (for reductions or pruning) is reasonable so long as similarly-ordered moves are treated the same no matter where they occur in the tree; not differently just because they appear along different paths, as opposed to where they appear within a single node's move list.