Daniel Shawul wrote:
I don't think so. If you use brute force and search all moves, move ordering doesn't matter.
AFAIK most modern top engines use move count based LMR/LMP, so move ordering really _does_ matter.
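To illustrate why ordering matters under move-count-based pruning: moves that appear late in the list get reduced or skipped entirely, so a strong move that is badly ordered can be pruned away. Here is a minimal sketch of that idea; the function names and thresholds are hypothetical, not taken from any particular engine.

```python
import math

def lmp_limit(depth):
    """Illustrative Late Move Pruning limit: beyond this move count,
    quiet moves are simply not searched at this depth."""
    return 3 + depth * depth  # quadratic growth is a common shape

def lmr_reduction(depth, move_number):
    """Illustrative Late Move Reduction: later moves at higher depths
    get searched with a reduced depth."""
    if depth < 3 or move_number < 4:
        return 0  # don't reduce shallow searches or early moves
    return int(math.log(depth) * math.log(move_number) / 2)
```

With rules like these, the move's *position in the list* directly decides how hard it gets searched, which is exactly why move ordering matters even when every move is nominally "searched".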
Regarding the original subject: I've sometimes dreamed of dynamically adjusting PSTs, but unfortunately chess seems to be a really simple game in this respect. A knight simply is better placed on e5 than on f3 unless tactical reasons dictate otherwise, and tactical reasons vary a lot depending on which part of the search tree you happen to be in.
I think the only reasonable chance to modify PSTs during the game is to adjust them using statistical data based on, e.g., the material balance. However, achieving even that seems like climbing Mount Everest.
Daniel Shawul wrote:If you have equal scores, none is better than the other. We just happen to pick the one that we searched first. Multi-PV will say both are equal. Also, this case is very rare, no?
No. You have only one score - for the first move, produced by a very thorough search (full-window PV search). The rest do not have a score, just a vague "proof" that none of them scores better than the first, usually produced by a much less accurate search (zero-window scout search). MultiPV can often show a higher score for the second move.
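The full-window vs. zero-window distinction above can be sketched with a toy Principal Variation Search over a nested-list game tree (leaves are scores from the side to move's view). This is a generic PVS sketch, not any specific engine's code: the first child gets the full (alpha, beta) window, the rest only a null window that proves or disproves "not better than alpha", with a re-search when the scout fails high.

```python
def pvs(node, alpha, beta):
    """Negamax PVS on a toy tree: int = leaf score, list = children."""
    if isinstance(node, int):
        return node
    first = True
    for child in node:
        if first:
            # Full-window search: the only move that gets a real score.
            score = -pvs(child, -beta, -alpha)
            first = False
        else:
            # Zero-window scout: only a yes/no answer vs. alpha.
            score = -pvs(child, -alpha - 1, -alpha)
            if alpha < score < beta:
                # Scout failed high: re-search with a full window.
                score = -pvs(child, -beta, -score)
        if score > alpha:
            alpha = score
        if alpha >= beta:
            break  # beta cutoff
    return alpha

best = pvs([[3, 5], [2, 8]], -100, 100)  # root score comes from the first child
```

Note how, after the first move, most of the tree is only ever visited with a one-point-wide window: those moves never receive a score comparable to the first move's.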
AFAIK most modern top engines use move count based LMR/LMP, so move ordering really _does_ matter.
At first I understood it as the kind of pruning made by looking at the eval, when it was originally posted.
For a timed search (not fixed depth), we are making the _presumption_ that we can search one more iteration!
If we can't (as in the fixed-depth case), then pruning those moves will only introduce errors...
The fact that effective branching factors are now close to 2 or lower may have contributed to their success now and not in the past...
For me it only worked for depth <= 2 && nmoves <= 24. In Scorpio 2.7, nmoves <= 8 proved to score better, making the engine more tactical? Waiting for results...
Well, for this discussion, score == eval() (a depth-0 search). So if the eval is the same for both moves,
I am saying it doesn't matter which move we pick. The multi-PV thing came in as an example of this indifference in selection.
Also, all this discussion applies only to sequential search; in a parallel search, any of the equally good moves can be selected...
Uri Blass wrote:I believe that there is no reason to be afraid of search instabilities.
The search of stockfish is clearly unstable and I can often see fail high and after it a fail low.
I agree with you on this. LMR pretty much guarantees search instabilities, but even without LMR the search is full of them: for example, accepting a hash-table score from a search to a different depth.
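The hash-table case can be made concrete with a minimal transposition-table probe; the names and layout here are illustrative, not from any particular engine. The standard rule is to accept a stored score only if it came from a search at least as deep as the current one, which means the score returned for a position depends on the path by which the search reached it: a built-in instability.

```python
# Bound types for stored scores (hypothetical naming).
EXACT, LOWER, UPPER = 0, 1, 2

tt = {}  # position key -> (depth, score, flag)

def tt_store(key, depth, score, flag):
    """Store a search result for this position."""
    tt[key] = (depth, score, flag)

def tt_probe(key, depth, alpha, beta):
    """Return a usable score, or None if the entry can't be trusted here."""
    entry = tt.get(key)
    if entry is None:
        return None
    e_depth, e_score, e_flag = entry
    if e_depth < depth:
        return None  # stored search was shallower than we need
    if e_flag == EXACT:
        return e_score
    if e_flag == LOWER and e_score >= beta:
        return e_score  # fail-high bound suffices for a cutoff
    if e_flag == UPPER and e_score <= alpha:
        return e_score  # fail-low bound suffices
    return None
```

A depth-5 node that hits a depth-8 entry returns the depth-8 score, while the same node reached without the hash hit would have computed its own depth-5 score: two different answers for one position.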
It's silly that the search instability idea was used almost like a proof that the idea is no good. (It may be no good, but not for this reason.)
I did not intend for this thread to be restricted to this single idea of using the history to modify piece square tables. There is the possibility of MCTS (Monte Carlo Tree Search) or some hybrid of it. I feel that we are not always using our imaginations.
Search instability and path dependency are just two of the problems I see.
The idea is also questionable in the first place, because the history heuristic has become somewhat outdated even for move ordering.
The statistics in it become too random at big search depths...
Now you want to spoil your evaluation with it too? I still use it for move ordering, though, for lack of other options.
It's silly that the search instability idea was used almost like a proof that the idea is no good. (It may be no good, but not for this reason.)
Wrong. History information is unreliable in the first place. LMR gains you a lot more than history does, so the search instability is acceptable there, just as null move's is.
Now you pick something that is very doubtful, add a search instability like I have never seen before, and expect it to work. Don't be surprised if some people are pessimistic about it.