Naturally, if there is a hash move for a position, it would be searched first. I meant this to be used for move ordering of the other moves.

Daniel Mehrmann wrote:
> Howdy Mark,
>
> mjlef wrote:
> > OK, here is my next screwy idea: try to order the moves you search based not just on the likelihood of their being found best, but also on how big a subtree they will generate.
> >
> > In cutoff nodes, many moves can often lead to a cutoff. So instead of searching a capture that might cause an extension (like a recapture or a check), delay searching those moves. I think for this to work you still need good move ordering, for example searching winning captures first. But why search a capture that is also a check (or is extended for other reasons) if a non-extended move might still fail high?
> >
> > Has anyone played with this idea? I will start some tests and see if it works.
> >
> > I think an efficient search need not search the best moves. An efficient search will search "good enough" moves with small subtrees. What do you think?
> >
> > Mark
>
> Well, I haven't played with your idea, and I think it is already outdated with "modern" PVS searches. Let me explain my point of view.
>
> "Modern" searches, based on things like LMR/PHR, do a lot of re-searches while the search is running, or at least they should.
>
> However, if we enter a node again, we already have a hash move from the previous search, which should lead to a cutoff.
>
> Looking back to the good old times, your idea might have given a nice improvement in special cases of threats and pins.
>
> Best,
> Daniel
Here is what I tried last night:
Whenever a move causes a beta cutoff, I compute a score for it (much like a history table) based on the number of subtree nodes divided by 2^depth (an estimate of the average subtree size, assuming a branching factor of 2). I multiply this by a constant for scaling and subtract it from a larger constant, so that a smaller subtree gives a higher (positive) score. I added two history arrays, one for captures and one for non-captures. I take the existing value in the array, multiply it by 3, add the new value, then divide by 4 (a weighted average, to smooth out the values).
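The update described above could be sketched like this (the table layout, constant names, and values here are my own assumptions, not taken from any particular engine):

```c
#include <stdint.h>

#define SUBTREE_SCALE 16      /* assumed scaling constant */
#define SUBTREE_BASE  10000   /* larger constant; smaller subtree => higher score */

static int capture_hist[64][64];  /* indexed by [from][to] square */
static int quiet_hist[64][64];

/* Called when the move from->to causes a beta cutoff at `depth`,
 * having searched `subtree_nodes` nodes below it. */
void update_subtree_history(int from, int to, int is_capture,
                            int depth, uint64_t subtree_nodes)
{
    /* average subtree size, assuming a branching factor of 2 */
    uint64_t avg = subtree_nodes >> depth;           /* nodes / 2^depth */

    /* smaller subtree => larger (positive) score */
    long score = (long)SUBTREE_BASE - (long)(avg * SUBTREE_SCALE);
    if (score < 0)
        score = 0;

    int *entry = is_capture ? &capture_hist[from][to]
                            : &quiet_hist[from][to];

    /* weighted average: 3 parts old value, 1 part new value */
    *entry = (int)((*entry * 3L + score) / 4);
}
```

Move ordering would then sort moves by the table entry for their from/to squares, higher first.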
I left in place the other move ordering, which goes: hash move, winning captures by SEE (sub-ordered by the capture history array), equal captures (sub-ordered by the new history values), non-captures (ordered by the new non-capture history values), then losing captures (ordered by how bad the loss is). I did not try to guess whether the node would be a fail-high one or not (if it is an all node, then all this extra work is not very helpful).
Results? Slightly better in testing against the otherwise identical program using standard ordering, and slightly better on average against 5 opponents (Fruit, etc.). I only got in about 100 fast games, but to me this suggests the idea is not as totally worthless as most of my ideas.
A simpler scheme, instead of all this node counting, might be to use it as a kind of tie-break criterion: if two moves both appear to win, say, a pawn, search the non-extending one first. The non-extending one would typically have a smaller subtree (checks might be the exception, since there would be fewer replies).
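The tie-break could be as simple as folding an "extends or not" bit below the material score in the ordering key. A minimal sketch (the function name and the assumed `would_extend` flag are hypothetical):

```c
/* Ordering key for the tie-break scheme above: primary criterion is the
 * SEE material score, and among moves with equal SEE scores, moves that
 * would NOT trigger an extension sort first. `would_extend` is assumed
 * to be precomputed (checks, recaptures, etc.). */
int move_order_key(int see_score, int would_extend)
{
    /* shift the material score up one bit so the extension flag can
     * only ever break ties, never override material */
    return see_score * 2 + (would_extend ? 0 : 1);
}
```

Sorting moves by this key descending gives: bigger material gain first, and within equal gains, the non-extending move first.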
A related idea I experimented with a bit a month or so ago: for any move that would cause an extension, first search it without the extension. If that was good enough to cause a beta cutoff, accept it (at non-PV nodes, of course). If not, re-search with the extension added. I never ran enough tests to see whether this helped. My hope was that, say, a winning capture check might be able to prove it was good enough without the need to extend... more testing soon.
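The control flow of that idea can be sketched as below. Everything here is an assumption for illustration: `search` stands in for a normal negamax (a canned stub is used so the flow is runnable), `Position` is the engine's position type after the move has been made, and the function name is made up.

```c
typedef struct Position Position;   /* opaque; engine-specific */

/* Stand-in for the engine's negamax search; returns a canned,
 * depth-dependent score purely so the control flow below runs. */
static int search(Position *pos, int alpha, int beta, int depth)
{
    (void)pos; (void)alpha; (void)beta;
    return depth >= 5 ? 120 : 40;   /* pretend the deeper search sees more */
}

/* At a non-PV node, for a move that would normally be extended:
 * try the unextended depth first; only re-search with the extension
 * if the cheap search fails to produce a beta cutoff. */
int search_maybe_extended(Position *pos, int alpha, int beta,
                          int depth, int extension)
{
    /* first try: no extension */
    int score = -search(pos, -beta, -alpha, depth - 1);

    if (score >= beta)
        return score;               /* good enough without extending */

    if (extension > 0)              /* re-search with the extension added */
        score = -search(pos, -beta, -alpha, depth - 1 + extension);

    return score;
}
```

The saving is the same as in any other zero-window-first scheme: the extended subtree is only paid for when the cheaper search could not already refute the position.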
Mark