sje wrote:I've tried using subtree node counts as an ordering heuristic, and while it may have helped slightly, I recall seeing lots of oscillation from one iteration to the next, which probably wasn't all that good.

With high-resolution CPU-usage clock system calls available nowadays, perhaps using processor usage instead of node counts might be a better approach.

I don't think either works. With the reductions, null-move search, forward pruning, etc., the node counts are almost meaningless. I spent a couple of months working on that issue a while back, and since I have the old Cray Blitz source running, I compared it to Crafty. CB had a rational node count that made ordering useful; Crafty's node counts are almost random numbers. A move that will become best next iteration might have a big node count this iteration, a small one, or an average one. I could not find any correlation at all.
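For readers unfamiliar with the heuristic being discussed: the idea is to record how many nodes each root move's subtree consumed during one iteration, then search the bigger subtrees earlier on the next iteration. A minimal sketch in C follows; `RootMove`, `order_root_moves`, and the keep-the-PV-move-first convention are all illustrative assumptions, not code from any actual engine:

```c
/* Hypothetical sketch of subtree-node-count root move ordering.
   All names here are made up for illustration. */
#include <stdlib.h>

typedef struct {
    int move;                 /* encoded move (placeholder)            */
    unsigned long long nodes; /* subtree size from the last iteration  */
} RootMove;

/* qsort comparator: larger subtrees sort first */
static int by_nodes_desc(const void *a, const void *b) {
    const RootMove *x = (const RootMove *)a;
    const RootMove *y = (const RootMove *)b;
    if (x->nodes > y->nodes) return -1;
    if (x->nodes < y->nodes) return 1;
    return 0;
}

/* Keep the PV move in slot 0; re-sort only the remaining moves
   by the node counts gathered during the previous iteration. */
void order_root_moves(RootMove *moves, int n) {
    if (n > 1)
        qsort(moves + 1, n - 1, sizeof(RootMove), by_nodes_desc);
}
```

The point of the thread is that with modern reductions and pruning these recorded counts no longer track which move is actually "hard", so a sort like this buys little.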
I was trying to take Hsu's "panic time" concept and come up with a working approach that recognized when moves were "hard" or "easy" before they failed low (or high). I couldn't find anything based on node counts that worked at all, and I tried at least a hundred different approaches...