matthewlai wrote: That is one aspect, but I believe the more important one is that null moves are guaranteed to be worse except in zugzwang. There's not really a way to prove that, though.
Worse than what? Many of the non-captures (if not most) will be worse than a null move, because the null move is at least guaranteed to give nothing away. Many non-captures go to unsafe squares, compromise protection, or unblock good captures... and then there are those that un-develop pieces. What you say is tautological, because that is how zugzwang is defined, but in practice the null move is simply a good representative of one of your best quiet moves.
Yes, at this point it becomes a heuristic. Whether the time saved is worth the occasional error will require testing to find out.
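To make the mechanism under discussion concrete, here is a minimal sketch of null-move pruning in a generic negamax search. All names are illustrative, not from any particular engine: before trying real moves, the side to move "passes", and if even that reduced-depth probe fails high, the node is cut. The toy subtraction game below is actually all zugzwang, which is exactly the caveat being debated; it is used here only because passing leaves the state unchanged, so the mechanism is easy to see.

```python
import math

def negamax(state, depth, alpha, beta, moves, apply_move, evaluate,
            allow_null=True):
    # evaluate(state) scores from the side to move's point of view.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    R = 2  # null-move depth reduction
    if allow_null and depth > R + 1:
        # Null move: hand the turn over without playing anything and
        # search shallower. If even doing nothing fails high, assume a
        # real move would too, and cut. (Unsound in zugzwang!)
        score = -negamax(state, depth - 1 - R, -beta, -beta + 1,
                         moves, apply_move, evaluate, allow_null=False)
        if score >= beta:
            return beta
    best = -math.inf
    for m in legal:
        score = -negamax(apply_move(state, m), depth - 1, -beta, -alpha,
                         moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

# Toy subtraction game: take 1-3 counters, taking the last one wins.
# The state is just the pile size, so "passing" leaves it unchanged.
take = lambda n, m: n - m
options = lambda n: [m for m in (1, 2, 3) if m <= n]
leaf_eval = lambda n: -1 if n == 0 else 0  # side to move at 0 has lost
```

With a pile of 5 the side to move wins (take 1 and mirror); with 4 every reply loses. A real engine would additionally disable the null probe whenever zugzwang is a risk, which is the whole point of the exchange above.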
It's not about searching only the optimal moves. It's about searching them later.
Well, it was about reducing them more.
Yes, but there is still one optimal move in this position. And the move ordering is based on our guess of which one is best. The knowledge or assumption that this node is an all-node doesn't really affect move ordering.
It depends on how you define 'best'. I would say it has nothing to do with score: ordering is all about a high probability of doing the job cheaply. Reduction is about how hard you expect it to be to reach the reliability you are aiming for.
Yes, totally. If you have already searched PxQ, the probability of RxQ goes up. However, if you can search RxQ first with the low prior probability (based on the assumption that PxQ is probably better), and get a cut-off, that will save a lot of time, and be right most of the time. Whether that will be an overall win or not would require testing.
That still means you would now have to re-search RxQ at the lower reduction if a posteriori you see that PxQ was no good.
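The trade-off being debated here (search the low-prior move first at a reduction, and pay for a re-search when it unexpectedly fails high) has the usual late-move-reduction shape. A hedged sketch, with a hypothetical `search` callback standing in for the recursive tree search:

```python
def search_late_moves(moves, depth, alpha, beta, search):
    # search(move, depth, alpha, beta) -> score for the side to move
    # (signs abstracted away; a real negamax would negate child scores).
    # Early moves get full depth; later (low-prior) moves get a
    # reduced-depth null-window probe first, and only an unexpected
    # fail high triggers the expensive full-depth re-search.
    best = float("-inf")
    for i, move in enumerate(moves):
        if i < 2 or depth < 3:
            score = search(move, depth - 1, alpha, beta)
        else:
            score = search(move, depth - 2, alpha, alpha + 1)  # cheap probe
            if score > alpha:  # unexpectedly failed high: verify
                score = search(move, depth - 1, alpha, beta)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best
```

With a fake `search` whose scores ignore depth, a strong move ordered late costs one probe plus one re-search, while weak late moves cost only the probe; the question in the thread is whether the saved full-depth searches outweigh the occasional double work.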
Also, isn't it a bit strange that we only re-search if a node unexpectedly fails high? When we re-search we are considering the possibility that it won't fail high with a deeper search. But why do we trust the result when a move fails low as expected? Why don't we also re-search that to consider the possibility that it may unexpectedly fail high with a deeper search?
When things are as expected you will require less stringent proof than when they are unexpected. That is just Bayesian statistics. If the prior probability of a fail high is low, weak evidence that the move fails low is enough to push the likelihood that the move fails low to the desired level. When the prior probability of a fail high was very small but it happened anyway, you need very strong evidence to push it to the desired level.
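In odds form, Bayes' rule makes that asymmetry explicit: posterior odds = prior odds × likelihood ratio, so a low-prior fail high needs a much stronger likelihood ratio (i.e., a deeper, more reliable re-search) to reach the same confidence. A small illustration with made-up numbers:

```python
def posterior(prior, likelihood_ratio):
    # Odds form of Bayes' rule: posterior odds = prior odds * LR.
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

# Evidence with likelihood ratio 19 takes an even prior (0.5) to 95%
# confidence, but barely moves a 2% prior; reaching 95% from a 2%
# prior takes a likelihood ratio of 19 * 49 = 931.
```

That is the sense in which a shallow (weakly informative) search suffices to confirm the expected fail low, while the surprising fail high warrants the full-depth re-search.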