The apparent paradox: experience shows that a deeper tree search yields better play, whereas mathematical investigation of the problem predicts less reliable results for deeper searches.
Can somebody explain (in simple words) the basic idea of pathology in game trees, what mathematical investigation that is, and why it does not apply to real game trees, even with random evaluation?
I guess I need to read that paper a few more times to understand it.
I read that paper, but to me it looks like there is something pathological with the assumptions, or maybe I failed to understand something? For example, look at assumption 2:
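For what it's worth, the kind of model those pathology papers analyze can be reproduced in a few lines: a uniform tree whose leaves are wins or losses independently at random, plus a static evaluator that reports the true value of a node but flips it with some error probability. This is only my toy sketch of that setup; the branching factor, leaf win probability, and error rate below are arbitrary choices of mine, not numbers from the paper:

```python
import random

def make_tree(depth, branching=2):
    """Uniform random game tree; a leaf is a win for the side to move
    with a fixed probability, independently of its siblings."""
    if depth == 0:
        return random.random() < 0.38          # arbitrary leaf win probability
    return [make_tree(depth - 1, branching) for _ in range(branching)]

def solve(node):
    """True negamax value: a node is won iff some move leads to a lost node."""
    if isinstance(node, bool):
        return node
    return any(not solve(child) for child in node)

def noisy_eval(node, err):
    """Static evaluator: the true value, flipped with probability `err`."""
    return solve(node) != (random.random() < err)

def backed_up(node, depth, err):
    """Search `depth` plies, evaluate the frontier with the noisy
    evaluator, and back the values up by negamax."""
    if depth == 0 or isinstance(node, bool):
        return noisy_eval(node, err)
    return any(not backed_up(child, depth - 1, err) for child in node)

def accuracy(depth, err, trials=2000, tree_depth=8):
    """Fraction of random trees whose backed-up root value is correct."""
    hits = 0
    for _ in range(trials):
        root = make_tree(tree_depth)
        hits += backed_up(root, depth, err) == solve(root)
    return hits / trials
```

In this independent-leaf model the backed-up accuracy need not improve with search depth, which is the "pathology"; the usual counter-argument, which matches the rest of this thread, is that real games violate the independence assumption, because sibling positions share almost all of their features.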
"each critical node has m children of the same sign and n children of the opposite sign"
This is not true in any real game tree I can imagine. If you make a good move that gains a winning position (for example, winning a queen), then your chances of going wrong in the subtree get significantly lower. So in this particular side line you see far fewer critical nodes, and when you do see them, m is much higher than usual and n is lower than usual.
Many, many pruning methods are based on this fact. Please correct me if I misunderstood the paper. I don't much enjoy reading overly abstract scientific papers.
Maybe the paper is not defining its variables well. m and n are not the same among all nodes, so they should be called m(j) and n(j), where j is the position of the node in the tree.
The only thing he says is that move selection is only relevant in nodes where some moves win and some lose: m(j) > 0 && n(j) > 0.
The conclusion he reaches is rather trivial: finding the moves to the best position in the tree is less probable at high search depth. This is simply because trees of high search depth contain more positions, and thus there is a higher chance of missing it.
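To make that definition concrete: a "critical" node in the paper's sense is just a node whose moves have mixed true values. A tiny helper (my own naming, purely illustrative) would be:

```python
def is_critical(child_values):
    """m(j) > 0 and n(j) > 0: the node has at least one winning move and at
    least one losing move, so move selection actually matters there.
    `child_values` are the children's true win/loss values for the mover."""
    return any(child_values) and not all(child_values)
```

At a non-critical node every move leads to the same game-theoretic result, so even a random move chooser cannot go wrong there.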
I think it's hard to argue mathematically why larger search depth leads to better decisions. The first thing to discuss is what "better" means. With a perfect algorithm, all moves that lead to a win are equally good. In real life, a position with only one winning sequence of moves is much worse than a position with many winning sequences.
In my opinion, a chess program doesn't find the best moves but moves with many chances to win. If there is no move sequence in the search range with a clear, stable advantage like winning material, the search will statistically maximize the number of good move sequences.
Thus, if there are no tactics to see, searching at increasing depth will give a better chance of finding positions with many options.
Mangar wrote:I think it's hard to argue mathematically why larger search depth leads to better decisions.
Well, if you manage to search to the end of the tree, you'll find the true minimax value of the current position and a move that realizes it. So deep-enough search leads to perfect decisions. If your evaluation function has the feature that it becomes a better predictor of the outcome of the game as you get closer to the end, then I think you can actually prove that deeper searches lead to better decisions.
I haven't read the article about pathology in game trees in a long time, but it didn't seem very relevant when I read it.
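Alvaro's first point is easy to demonstrate on a game small enough to search to the end. Here is a generic exhaustive negamax on tic-tac-toe (my own sketch, nothing engine-specific): searched to the leaves, it returns the true game-theoretic value, so the decisions it implies are perfect.

```python
LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
         (0,3,6), (1,4,7), (2,5,8),      # columns
         (0,4,8), (2,4,6)]               # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Exhaustive search to the end of the game.
    Returns +1/0/-1 from the viewpoint of `player`, the side to move."""
    if winner(board):                 # the previous move completed a line
        return -1
    if all(board):                    # board full, no winner: draw
        return 0
    opponent = 'O' if player == 'X' else 'X'
    best = -1
    for square in range(9):
        if not board[square]:
            board[square] = player
            best = max(best, -negamax(board, opponent))
            board[square] = ''
    return best
```

`negamax([''] * 9, 'X')` returns 0: with perfect play tic-tac-toe is a draw, and any root move that preserves the value 0 is a perfect decision. The open question in this thread is only what happens when the frontier is an imperfect evaluation instead of the game's end.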
Mangar wrote:The conclusion he reaches is rather trivial: finding the moves to the best position in the tree is less probable at high search depth. This is simply because trees of high search depth contain more positions, and thus there is a higher chance of missing it.
But that is a totally useless conclusion. Who cares whether you find the best position, if there are billions of positions with a score that is only infinitesimally lower? None of the scores are any good anyway (if you have not searched all the way to mate), and they will change on deeper search. Even in quiet positions, a 1-ply search hardly ever produces the same score as the current evaluation.
hgm wrote:But that is a totally useless conclusion. Who cares whether you find the best position, if there are billions of positions with a score that is only infinitesimally lower? None of the scores are any good anyway (if you have not searched all the way to mate), and they will change on deeper search. Even in quiet positions, a 1-ply search hardly ever produces the same score as the current evaluation.
AlvaroBegue wrote:
Well, if you manage to search to the end of the tree, you'll find the true minimax value of the current position and a move that realizes it. So deep-enough search leads to perfect decisions.
That's why I exclude "tactics". If the position gets simpler to evaluate at higher search depth (for example with a large material imbalance), this is true. Sadly, most balanced positions with little material (i.e. endgame positions) are very hard to evaluate too.
I like the discussion about "why does deeper search lead to better move selection", because I hope that once I've got a better understanding, maybe I'll be able to find a general way to reduce search depth that replaces null move, LMR, ... by using a probability approach. But I haven't got any usable idea so far.
A long time ago I wrote a "connect-5" program, played on a go board. The side that first has a row or diagonal of 5 pieces wins.
As I didn't have a good understanding of how to evaluate positions, I simply counted the number of smaller rows a side already had.
With this implementation, a search at depth 5 didn't play better than a search at depth 3. When I found a rule-only implementation (without search) that won 100% against my depth-5 search, I gave up.
Because of this experience, I am sure that getting a better result from deeper searches highly depends on the eval. In chess, an eval that only counts material is already good enough to achieve this. In other games it is sometimes not that easy.
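I don't know how Mangar's program actually counted its rows, but "count the smaller rows a side already has" might look something like this sketch (board as a list of strings, quadratic weighting is my own guess, not his code):

```python
def runs(board, side):
    """Maximal run lengths for `side` along rows, columns and both diagonals."""
    n = len(board)
    lengths = []
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        for r in range(n):
            for c in range(n):
                if board[r][c] != side:
                    continue
                pr, pc = r - dr, c - dc
                # only count from the first stone of a run
                if 0 <= pr < n and 0 <= pc < n and board[pr][pc] == side:
                    continue
                length, rr, cc = 0, r, c
                while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == side:
                    length += 1
                    rr, cc = rr + dr, cc + dc
                lengths.append(length)
    return lengths

def evaluate(board, side, opp):
    """Naive eval: weight each run by its squared length, minus the opponent's."""
    return (sum(l * l for l in runs(board, side))
            - sum(l * l for l in runs(board, opp)))
```

A serious connect-5 eval would presumably ignore length-1 runs and distinguish open from blocked rows; Mangar's point stands either way: a counter like this is a weak predictor of the outcome, so a deeper search built on top of it gains little.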