Adjustable search pruning depending on time control

Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Adjustable search pruning depending on time control

Post by Michel »

It is not unreasonable to assume that null move and lmr would be dependent since they are both methods to get a quick fail low on a move (as is futility).
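
For illustration, a rough C++ sketch of how all three techniques try to establish a fail low cheaply before committing to a full-depth, full-window search. The Position/evaluate/qsearch interfaces, futilityMargin(), lmrReduction() and the constant R are assumed names for this sketch, not code from any engine in this thread.

Code:

// Hypothetical negamax fragment: null move, futility and LMR all try to get a
// cheap fail low, either on the whole node or on an individual move.
int search(Position& pos, int alpha, int beta, int depth) {
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    const int staticEval = evaluate(pos);

    // Null move: hand the opponent a free move; if we still reach beta with a
    // reduced search, assume the real moves would too and cut immediately.
    if (!pos.inCheck() && staticEval >= beta) {
        pos.doNullMove();
        int nullScore = -search(pos, -beta, -beta + 1, depth - 1 - R);
        pos.undoNullMove();
        if (nullScore >= beta)
            return nullScore;
    }

    int moveCount = 0;
    for (const Move m : generateMoves(pos)) {
        ++moveCount;
        const bool quiet = pos.isQuiet(m);

        // Futility: a quiet move whose static eval plus an optimistic margin
        // still cannot reach alpha is assumed to fail low and is skipped.
        if (depth <= 2 && quiet && !pos.givesCheck(m)
            && staticEval + futilityMargin(depth) <= alpha)
            continue;

        pos.doMove(m);
        int score;
        // LMR: late quiet moves get a reduced-depth, null-window search first,
        // hoping for a cheap fail low; only a surprise fail high is re-searched.
        if (moveCount > 3 && depth >= 3 && quiet) {
            score = -search(pos, -alpha - 1, -alpha,
                            depth - 1 - lmrReduction(depth, moveCount));
            if (score > alpha)
                score = -search(pos, -beta, -alpha, depth - 1);
        } else {
            score = -search(pos, -beta, -alpha, depth - 1);
        }
        pos.undoMove(m);

        if (score >= beta)
            return score;        // fail high
        if (score > alpha)
            alpha = score;
    }
    return alpha;                // best score found, or a fail low
}
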
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

Michel wrote:It is not unreasonable to assume that null move and lmr would be dependent since they are both methods to get a quick fail low on a move (as is futility).
Yes, it is just a question of degree. Since one is based on the score and the other on the move count, I didn't think they would be all that closely correlated; probably I was wrong about that. It now appears that they do have considerable interdependence, but it seems that our current values are fairly good ones for both terms.
Harald
Posts: 317
Joined: Thu Mar 09, 2006 1:07 am

Re: Adjustable search pruning depending on time control

Post by Harald »

lkaufman wrote:
Michel wrote:It is not unreasonable to assume that null move and lmr would be dependent since they are both methods to get a quick fail low on a move (as is futility).
Yes, it is just a question of degree. Since one is based on the score and the other on the move count, I didn't think they would be all that closely correlated; probably I was wrong about that. It now appears that they do have considerable interdependence, but it seems that our current values are fairly good ones for both terms.
The null move represents a good or neutral move, and the moves considered for late move reductions are ordered good - neutral - bad, where the bad moves easily destroy the position. Perhaps this is a connection between these methods, and the reduction for null moves should be similar to the reductions applied to the first (late) moves. Has anybody tested this?

Harald
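
One way to test Harald's suggestion would be to derive both reductions from a single, jointly tuned formula, treating the null move as one of the first "late" moves in the ordering. A hypothetical sketch; the log-based shape, the constants, and the function names are illustrative and not taken from any engine mentioned in this thread.

Code:

#include <cmath>

// Shared reduction formula for LMR and the null move, so both can be tuned together.
int reduction(int depth, int moveNumber) {
    return static_cast<int>(0.5 + std::log(double(depth)) * std::log(double(moveNumber)) / 2.0);
}

int lmrReduction(int depth, int moveCount) {
    return reduction(depth, moveCount);
}

int nullMoveReduction(int depth) {
    // Pretend the null move sits at, say, slot 4 of the move ordering,
    // plus one extra ply because skipping a move is "free" for the opponent.
    return 1 + reduction(depth, 4);
}
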
User avatar
Eelco de Groot
Posts: 4563
Joined: Sun Mar 12, 2006 2:40 am

Re: Adjustable search pruning depending on time control

Post by Eelco de Groot »

Harald wrote:
lkaufman wrote:
Michel wrote:It is not unreasonable to assume that null move and lmr would be dependent since they are both methods to get a quick fail low on a move (as is futility).
Yes, it is just a question of degree. Since one is based on the score and the other on the move count, I didn't think they would be all that closely correlated; probably I was wrong about that. It now appears that they do have considerable interdependence, but it seems that our current values are fairly good ones for both terms.
The null move represents a good or neutral move, and the moves considered for late move reductions are ordered good - neutral - bad, where the bad moves easily destroy the position. Perhaps this is a connection between these methods, and the reduction for null moves should be similar to the reductions applied to the first (late) moves. Has anybody tested this?

Harald
The connection that Sam has found, which implies you should probably tune the null move reduction and the LMR reduction together, is, as far as I can see, probably this: after a successful null move, i.e. when the null move fails high, you have to take into account that the opponent's moves were reduced both ways, first by the null move reduction and then by the LMR reduction. All his moves failed low, but they were searched only at a null-move + LMR reduced depth. The opponent had the advantage of making two moves in a row and still failed low, but the reduction of those searches was at least twofold (and it could have been reduced even further later/deeper in the null-window search).
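
In terms of plies, this twofold reduction can be written down as simple depth accounting; R stands for the null-move reduction and r for the LMR reduction applied inside the verification search. The function and the example numbers are purely illustrative.

Code:

// Effective depth seen by a late opponent reply inside a null-move search.
int effectiveDepthOfLateReply(int depth, int R, int r) {
    int nullSearchDepth = depth - 1 - R;            // depth the null-move search is called with
    int lateReplyDepth  = nullSearchDepth - 1 - r;  // further reduced by LMR inside that search
    return lateReplyDepth;                          // e.g. depth = 12, R = 3, r = 2  ->  5
}
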

Regarding Stockfish again, there is also LMR pruning in CUT nodes whenever you could not do a null move there, but this mainly functions as a secondary IID. If all goes well you will find the hash result from the last time the ttMove failed high, and the depth of that hit will be sufficient to cross the LMR phase directly, without any actual searching. Even if there is no direct hash hit, the LMR-reduced search may finish quickly if it finds a hash hit a bit deeper. After the LMR phase there will always be a re-search at full depth if there was a beta cutoff.
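
A hedged sketch of that "secondary IID" pattern; the interfaces are assumptions for illustration, not Stockfish's actual code. The reduced null-window search is cheap because it is often answered from the transposition table (search() is assumed to probe the hash table first), and any fail high is verified by a full-depth re-search before the cutoff is trusted.

Code:

int searchTTMove(Position& pos, Move ttMove, int alpha, int beta,
                 int depth, int reduction) {
    pos.doMove(ttMove);

    // Reduced probe: a sufficiently deep hash entry from the last time ttMove
    // failed high lets this call return without doing any real searching.
    int score = -search(pos, -alpha - 1, -alpha, depth - 1 - reduction);

    // Only if the reduced search fails high do we pay for the full-depth
    // re-search that actually justifies the beta cutoff.
    if (score > alpha)
        score = -search(pos, -beta, -alpha, depth - 1);

    pos.undoMove(ttMove);
    return score;
}
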

I think that is part of the reason why in a CUT node the LMR reduction is a bit arbitrary (for Stockfish, or Rainbow Serpent with even the d² reductions :) ); it only makes sense to reduce a bit less there than in the IID phase, or the LMR is completely useless there (apart from the fact that you are now looking for the TT result after having made a move). In ALL nodes it is different, of course: the LMR reduction is now real, as long as there is no flip-flopping to a CUT node, with the associated re-searches to full depth.

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: Adjustable search pruning depending on time control

Post by lucasart »

lkaufman wrote: It does not seem natural at all to me to express reduction as a percentage of depth. If we assume a branching factor of 2, a reduction of four plies costs only about 6% (2⁻⁴) of a normal search (assuming no re-search is needed). There is not that much more to gain by reducing further, while the chance of missing a good move will go up significantly with each extra reduction. Still, I can see an argument for some mild increase in reduction with depth; I just wonder why no one except perhaps Stockfish has been able to demonstrate that this helps the Elo.
I did quite a bit of experimenting in my engine, and I've never found any evidence that increasing the reduction with depth helps.

I don't know why Stockfish does this. Perhaps they got an Elo increase from it because they were comparing apples and pears: IOW the previous version had an overly conservative LMR, or didn't have the right safeguard conditions. And as a corollary, I reckon that half plies and the like are useless too.

I also experimented with the null move reduction, comparing R=3 with R=3+depth/4, and I've never managed to show that one version is better than the other. I went for 3+depth/4, mainly because Stockfish does it. So I thought: if they do it, there must be a reason...
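
Written out, the two variants being compared only differ at larger depths; the function names below are made up for illustration.

Code:

// Fixed versus depth-dependent null-move reduction, as compared in the post.
// They agree for depth <= 3 and drift apart only slowly (depth 8 -> 5,
// depth 16 -> 7), which may be one reason the difference is hard to measure.
int nullReductionFixed(int /*depth*/) { return 3; }
int nullReductionAdaptive(int depth)  { return 3 + depth / 4; }
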

Intuitively though, I would want to think that the "depth diminishing return" principle means that LMR reduction should increase with depth, and so should Null Move reduction. But so many things are intuitively right, and empirically wrong, that it's best to test, and only trust empirical results.

But then again, what works in one engine may not work in another, and vice versa. For example, I tried the Stockfish singular extension, and I've never been able to measure either a gain or a regression, so I removed it. Yet Stockfish got +40 Elo from that, IIRC!
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.